Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Good morning, you're listening to Breakfast Bites, and I'm Felicia King.
In today's show, I'm going to talk about artificial intelligence.
And no, I'm not going to talk about it the same way everybody else is.
There are some interesting insights I'll share with you, and I'm not going to
be making an argument that you should go all gung-ho on AI.
(00:20):
In fact, I'm going to be making an argument that you should be very cautious
and thoughtful about how you engage in risk management while using this technology.
It's very important that you understand the deep implications of using AI,
what you can do to manage or limit the risk of using AI,
(00:46):
and some good ideas about reasonably useful ways to use it, versus other areas
where you should probably just abstain, if not now, then probably permanently.
So let's get started. Like I said, it's going to be a little different than
probably everybody else's take on AI. I generally don't
(01:09):
run like a lemming off a cliff when we're talking about technology. There are
certainly a lot of people driven by fear of missing out, otherwise known as FOMO,
and I don't engage in FOMO. I am generally really suspicious of
anything and everything.
And the more it is promoted, the more popularized it is,
(01:32):
the more seriously you need to be asking the question of cui bono, which is,
of course, who benefits.
And consider the convergence of the technocratic approach:
all of the technology companies are basically advancing technocracy.
(01:55):
And technocracy is inherently anti-human and anti-democracy.
It is very much a centralization control mechanism.
And so I'm not going to say that AI is 100% all bad.
I'm going to say that you need to understand it.
And in some ways,
(02:17):
you could think about it in terms of asymmetrical warfare.
You as an individual, or even as
a small business, don't have the resources to
construct your own AI instances running effectively as
a closed system, which is one of the only ways to truly, deeply use it while mitigating risk.
(02:44):
It's just too economically challenging at this point in time to be able to run
that kind of a closed system.
You'd have to have quite a lot of compute power yourself, and you'd have to
have the technical skill to be able to run those instances,
those closed AI instances on devices that you own, right?
(03:08):
That you don't have counterparty risk for.
So you need not only very significant infrastructure
but also talent capabilities, and probably no clear way to monetize that.
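For context, a "closed system" here means the model weights and all inference run on hardware you own, so prompts and data never leave your network. Here's a minimal sketch of what that looks like in practice, assuming an open-weights model file in GGUF format and the llama-cpp-python library; both are illustrative assumptions, not tools named in this episode:

```python
# Minimal sketch of a "closed system" AI instance: the model file and the
# inference both live on hardware you own, so prompts never leave your network.
# Assumes `pip install llama-cpp-python` and a locally downloaded GGUF model
# (the path below is a hypothetical placeholder).
from llama_cpp import Llama

llm = Llama(
    model_path="./models/local-model.gguf",  # hypothetical local model file
    n_ctx=4096,                              # context window size
)

response = llm(
    "Summarize the risks of pasting client data into public chatbots.",
    max_tokens=256,
)
print(response["choices"][0]["text"])
```

Even this small sketch illustrates the cost profile described above: you need capable hardware to run it at useful speed, and someone with the skill to operate and maintain it.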
I recently came back from an IT service provider convention,
(03:31):
and there were a few hundred owners of IT service providers there.
Approximately 60 of us got together and had a very intensive discussion
about this particular topic.
You know, there's a tremendous amount of risk associated with counterparties,
as well as your own employees,
(03:52):
putting data into an AI system where the person interacting with it has no control
over how that data is collected, leaked, and re-leveraged elsewhere.
And there's another effect there on the personal side as well.
(04:16):
You know, one very clear thing you should always abstain from,
and hopefully this is obvious to everyone: if
you're engaging with an AI chatbot of some sort, whether it's
OpenAI's ChatGPT or whatever the heck it happens to be, I mean, heck, it could be some
of the chat features now AI-integrated with Bing or any place
(04:37):
else like that. If it's basically an AI chatbot,
you need to assume that there's literally no security around it.
You have to assume that whatever data you're putting in there,
you're now willing to send to the world.
(04:58):
Because whatever you put in there will absolutely
be ingested into the AI platform,
used to populate the model with more data, and it could be easily leaked.
In fact, there have been extensive examples of leaks.
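One practical way to operationalize that assumption is a redaction pass at the boundary, before anything is sent to a third-party chatbot. This is a minimal sketch, not something described in the episode; the patterns and the example prompt are illustrative placeholders, not a complete confidentiality filter:

```python
import re

# Illustrative patterns only; a real deployment would need a far more
# complete set (names, account numbers, internal project codes, etc.).
REDACTION_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace anything that looks confidential before it leaves the network."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

prompt = "Draft a welcome email for Jane (jane.doe@example.com, 555-867-5309)."
print(redact(prompt))
# -> Draft a welcome email for Jane ([REDACTED-EMAIL], [REDACTED-PHONE]).
```

The point isn't that a few regexes solve the problem; it's that the "assume it goes public" rule can be enforced in code at the network boundary instead of relying on each employee's judgment.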
(05:19):
There's actually been an attorney who was sanctioned because he used AI to do
sensitive work for an actual case.
And the stuff that the AI came up with was just wrong.
(05:40):
To put it bluntly, it was just wrong.
And then this attorney didn't go and double check everything that the AI came up with.
And he used that to represent his client, and it turns out the whole thing
was a giant mess and a whole bunch of wrong.
(06:02):
And next thing you know, the judge is literally fining him and sanctioning him.
So you have to be really aware of these implications.
I mean, it could be very professionally adverse. You should not be using it
for financial matters or private information,
(06:22):
and the same goes for the medical industry, certainly the legal industry,
and anything that you think is intellectual property. Anything that you think should
be confidential should remain confidential.
I think something that you could use AI for without too much concern would be
(06:44):
something like marketing content
or sales promotional content that you ultimately intend to make public.
And that sort of thing can be quite useful.
It can be helpful when you want to revise or improve the language
(07:05):
of a marketing-related email.
What I have seen so far, though, and what I find completely reprehensible, is that
the majority of businesses out there are using AI in an exceedingly inappropriate
(07:26):
manner. Just absolutely inappropriate.
And this is all due to their lack of operational maturity. Most organizations,
and I'm going to argue probably over 80% of organizations of all shapes,
sizes, and flavors, lack it. And please do not think that a large company mystically
has their poop in a group.
In fact, I would argue it's a lot easier to take a 15-person company and make
(07:52):
them operationally mature, and get them to function in an adult, secure manner
consistent with protecting customer data, than it is for a large company.
Large companies really struggle with governance, accountability, and transparency.
(08:12):
They struggle with getting employee behavior to be consistent with policy.
And all companies, other than very small companies with strong leaders,
seem to very much struggle with policies.
So, I mean, what I've seen just across the board,
(08:32):
not necessarily in the IT service provider space, because I think a lot of IT
service providers are very suspicious of technology,
and organizations like mine are headed by people who are more aware of
the kind of potential bad mojo that could be happening.
(08:55):
We're very concerned about things like counterparty risk.
I know a number of my friends are actually running their own closed AI systems internally.
And they've certainly articulated to me the horrendous cost profile of doing
(09:15):
so. So if you're going to do something like that, you have to have a clear understanding
as to exactly how you're going to monetize that.
And then be prepared to hire lawyers to help you come up with a policy for it.
One of the things that we're doing now for clients is that in the next month,
we are going to be releasing something for all of our clients who subscribe to our vCISO services.
(09:40):
Basically, for our clients who want to be operationally mature and want to mitigate risk,
we offer vCTO and vCISO services.
So really, I'd characterize the vCTO side this way:
if you don't have a technology executive, oh, you desperately need one.
It's very difficult, if not impossible, to get operational maturity in an organization without one,
(10:06):
along with all of the economic benefits that come from that, including keeping
more of the money that you make.
This isn't exclusively about driving sales, but absolutely operational maturity
does make you more profitable from a sales perspective.
It will also enable you to be more efficient, not only in terms of employee
(10:27):
productivity, but just less waste.
So you're able to keep more of the money that you made because your expenses end up going down.
And one of the very immature things that I see almost all companies doing,
with ramifications that are shockingly bad,
and I'll give you an example of it, is that they think, oh, I'm not going to
(10:52):
talk to my CTO or my CISO because they cost money.
And then they make a $200,000 mistake.
I can't even begin to tell you how many times I could come up with examples
of... I've worked with almost 500 clients over the last 30 years.
And in that timeframe, I could come up with dozens of examples of organizations
(11:16):
that, had they consulted with me for less than 10 hours,
would not have lit at least $200,000 on fire.
So sometimes this poverty mentality that happens is just completely ridiculous.
(11:37):
And that all comes from this paradigm where people are like,
well, we didn't think we needed you.
I mean, I'm literally going to quote some of them. We didn't think we needed you.
Well, I think in this realm of AI, it's the same thing. You don't have any
clue how to deal with this thing.
And so for the clients that subscribe to our vCTO and vCISO services,
(12:01):
we're literally giving them an AI policy for their business.
And we're also making available to all of their staff 15-minute sessions:
AI courses that are designed for risk mitigation and risk management,
(12:22):
and there are quizzes. You have to actually pass the quiz;
you have to get 85% or better.
And this gets turned into reports that can be provided for any number of reasons,
such as vendor risk management, supply chain risk management,
(12:44):
compliance purposes,
cyber insurance, E&O insurance.
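To illustrate how mechanical that kind of reporting can be: the 85% threshold is from the episode, but everything else in this sketch, the data shape, field names, and courses, is hypothetical:

```python
from dataclasses import dataclass

PASS_THRESHOLD = 0.85  # the 85%-or-better bar mentioned above

@dataclass
class QuizResult:
    employee: str
    course: str
    score: float  # 0.0 to 1.0

def compliance_report(results: list[QuizResult]) -> dict[str, bool]:
    """Map each employee to whether they passed every course they took."""
    passed: dict[str, bool] = {}
    for r in results:
        ok = r.score >= PASS_THRESHOLD
        passed[r.employee] = passed.get(r.employee, True) and ok
    return passed

results = [
    QuizResult("alice", "AI Risk Basics", 0.92),
    QuizResult("bob", "AI Risk Basics", 0.70),
]
print(compliance_report(results))  # {'alice': True, 'bob': False}
```

A pass/fail roll-up like this is exactly the sort of artifact a cyber insurer or an E&O underwriter can consume.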
I mean, this is basically one of those things where, by simply having
the right ongoing relationship with your vCTO and vCISO, you are proactively
(13:05):
as a client getting these things that directly mitigate risk,
because it's taking the right approach. It's saying, we need a policy.
Great. Let's have a template policy. And if we want to customize it, we can customize it.
But then beyond that, how are we training our staff to act in accordance with
risk management around AI?
(13:27):
Okay, so that's always how you should take this operationally mature
approach: you have to look at it from an HR management and HR enforcement perspective.
But first, people have to be aware of what the policy is. So you have to have a policy.
And you have to have a technology executive to come up with the policy.
I think it's naive to assume that you can simply ask corporate counsel for this.
(13:52):
Even in the IT service provider space, there are exceptionally few lawyers who
are actually really adept at understanding the nuances in the IT services industry.
And even for those that specialize in it,
their knowledge is not a substitute
(14:14):
for the type of knowledge that a CTO or
CISO has. Of course, the best of both worlds
is where you have a combined CTO/CISO, where in a single
individual you have all of that skill. So I'm not saying don't have your
attorney review things if you want to. What I am saying is that expecting your
(14:35):
attorney to come up with these policies,
that's just asking a bit much.
And something that's completely ineffective is when somebody on your
staff goes out into the general world, grabs a template, and says,
hey, this is our policy,
which I've actually seen all
(14:56):
kinds of organizations do with a written information security plan.
And it's hilariously bad. It's not even remotely legally defensible because
it was never customized.
But the other thing is, the written information security plan they got
was basically a template.
It has no alignment with their actual business, their actual technical controls,
(15:20):
or their actual posture.
So it's basically mendacity.
It's just a giant pile of lies and theater.
And so when an organization fails to have an ongoing relationship with a CISO,
they do not have
(15:42):
a mechanism that makes it so they can consume these sorts of services
on a financially predictable basis.
So the key element here is that as an organizational leader,
you have to look at things and say,
(16:03):
you know, would I rather have a situation with my attorney where I just pay
them $500 a month or $1,000 a month, and then they just take care of
what I need up to a certain limit every month?
It helps us with our budget planning, and it helps us understand
that we shouldn't have this barrier to communications with such
(16:27):
an essential resource.
And so it becomes easy for you to budget.
And really, you're going to get
better service when you have an ongoing relationship with your attorney.
There are a lot of organizations that use that approach with their outsourced
CFO or their tax advisor.
They need to be doing the same exact thing from a technology executive perspective as well.
(16:51):
So what I've been seeing is companies having no policy,
and then just allowing their staff to take all kinds of information and shove it into AI.
And I'm just finding the whole thing to be truly disturbing.
It's disturbing if you take a look at, say, the
entire staffing industry. You know,
(17:14):
what do most recruiters do? Well, in a lot of cases they're probably helping
applicants write and improve their resumes. So I guess
the first question somebody would have to ask
themselves is, is there anything in that resume that is confidential?
(17:37):
And if there's not, fine, go ahead and use AI for it, right?
Because, as I said in the beginning, the risk management approach is that you have to
assume anything you're putting into some sort of AI engine is going
to be utilized going forward across the board. You're feeding it into the AI model,
(17:59):
and you've now given it data that it can then produce later on and publicly disclose elsewhere.
So if the information you're feeding into it is totally
non-confidential and was going to be publicly exposed anyway, then go ahead and do it.
(18:23):
But my point is that you would really need to
be looking at your business processes and say, hmm, what is the nature of our
relationship with our customers?
Do our customers expect that we're going to be doing that with their data?
(18:44):
Or are they expecting that we're keeping that information confidential and we're
only providing that information to a select number of parties?
And when I say a select number of parties: oftentimes what you find in
a lot of specialty recruiting industries is that
their benefit is this private asset. If
(19:07):
you look at technical recruiting, for example, they collect this list of personnel
who have certain backgrounds and specializations, and
the idea is that when you go to that technical recruiter, they have this database
of people that they can look at. It's supposed to be their
private database. And then they're supposed to look at it and say,
okay, well, I'm going to find somebody that I know has the skill set that would
(19:31):
be the right match for that client.
And then they make that recommendation, right? So I think that's going
to be a situation that each organization has to evaluate on their own.
They have to look at it in terms of what's the nature of the relationship that
(19:52):
we have with our customers. So in the case of a staffing company or a recruiter,
that would be a matter of looking at your agreement.
What is the agreement you have with the people who come in and say,
hey, we want to use you to help us find a job, that type of thing?
(20:14):
And if there's a situation where the staff at the company did something inconsistent
with the perception of the customer,
then that's where you have challenges.
And this is going to be true in any business of any shape, size, or flavor.
(20:36):
Anytime there is an inconsistency in the understanding between the parties,
that's where you have conflict.
And that's where you have disagreements, and the drama levels go up, and people get mad.
And that's just bad wasted energy, when people get
(20:57):
mad because it turns into unproductive activities.
So I would be encouraging all organizations to be getting their policies together for AI,
having internal discussions with their managers about how much their staff is
currently using it or wants to use it.
(21:20):
Consult with your technology executive. Come up
with the policy, come up with a strategy, and whatever you're going to do, be consistent
about it across the board. Don't let one department do one thing and another
department do another. Be consistent about it and be intentional
about it, because if you're not intentional about it,
(21:43):
then the personnel managers in the organization are not able to enforce that policy across all staff.
So like I said, what we're doing for our clients is we're actually giving them
a policy and then we're giving them training.
But this is only available to the clients that have that kind of a relationship with us.
(22:05):
And those are the ones that want to be operationally mature as opposed to,
I don't know, Wild West is kind of the way I would describe it.
Because, you know, you either have governance, accountability and transparency,
or you don't. And if you don't, then everything is pretty haphazard.
Policy is whatever somebody feels it is that day.
(22:28):
And that results in a lot of problems, too.
So going back to some of the challenges with AI: it's making it a lot easier
for bad guys to perpetrate scams on private individuals as well as
people who work at organizations.
(22:49):
And what I've seen as
a very stark differentiator is whether
or not an organization has a very effective training program. So we go back
to governance, accountability, and transparency again. If you're going to drive
accountability in the organization that says, hey, I want you to act in a risk-managed manner and
(23:17):
utilize the technology the company is providing you in an effective way,
then the company needs to put forth what effective means. What are the boundaries?
And the best way to do that is through regular ongoing training.
And when I say regular ongoing, I think it's weekly. I think the answer is weekly.
(23:39):
I really do. Because I've seen this across
thousands of individuals at a large number of organizations, and
the thing that is the differentiator is an attitude of weekly training.
And weekly training is very easy to consume as well.
I mean, it's very easy to say, oh, I'm going to do this five minutes this week,
(24:00):
or I'm going to do this 15-minute training this week.
The other thing that I've seen come out of that is not only risk reduction,
so effective risk management, which drives down costs, but also increased
productivity, because you can infuse productivity training into that training curriculum.
Oh, my gosh. Oh, this is how we use OneDrive correctly, you know,
(24:22):
or this is how we use Outlook correctly.
And that training can and should be a convergence of company policy,
recommended strategies,
and tips and tricks that are organizationally agnostic,
meaning, this is how Outlook works for every company in the entire world.
(24:45):
It's kind of more like, how do I use technology?
All the way to, hey, here's how we want you to utilize technology.
This is our data retention policy. This is our data classification policy.
And make that a combination of
policy documents, plus reference documents that somebody can refer to, as well as
(25:09):
bite-sized videos and longer training videos, maybe all the way up to an annual
series of up to four 45-minute classes.
And preferably, you would have them all be on-demand, self-service training,
(25:29):
meaning they could now be incorporated into a regularized onboarding program
and an annual retraining program.
Some of them, not all of them, right? Because stuff changes over time.
But that way, you're getting consistency across the organization.
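To make that structure concrete, here's a minimal sketch of such a training catalog as data; every course name and field here is hypothetical, not something prescribed in the episode:

```python
from dataclasses import dataclass

@dataclass
class Course:
    title: str
    minutes: int
    cadence: str    # "weekly", "onboarding", or "annual"
    reusable: bool  # safe to reuse year over year, or rebuilt as things change

# Hypothetical catalog mixing company policy with organization-agnostic skills.
CATALOG = [
    Course("AI risk management basics", 15, "weekly", reusable=False),
    Course("Using OneDrive correctly", 5, "weekly", reusable=True),
    Course("Data classification policy", 10, "onboarding", reusable=False),
    Course("Annual security deep dive", 45, "annual", reusable=False),
]

# Onboarding view: everything a new hire must complete, shortest first.
onboarding = sorted(
    (c for c in CATALOG if c.cadence == "onboarding"),
    key=lambda c: c.minutes,
)
for c in onboarding:
    print(f"{c.title}: {c.minutes} min")
```

Tagging each item with a cadence and a reusable flag is what lets the same catalog drive the weekly drip, the onboarding program, and the annual retraining pass.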
So the whole business of avoiding scams has gotten harder because of AI.
(25:58):
It's basically lowering the barriers for criminals to run effective scams.
It's lowering the economic barrier, and it's lowering the technological hurdle.
And it's also driving up automation to increase the blast radius of their attacks.
(26:21):
So, you know, if you as an individual or you as a business are not
taking the commensurate actions to up your game, then you've got some challenges.
Basically, the bad guys have declared war on you. What are you doing?
(26:43):
You don't get to just lie back and go, oh,
okay, I'm going to do nothing, even though they've declared war on me.
So beyond that, I think AI is a digital control mechanism that is advancing technocracy.
And you can just look at
all of the ways in which it's going
to automate, even more than
(27:05):
it already does, the profiling of
every single human in the entire world. And many of the
big tech companies have stated that that's their exact objective:
to have a profile for every human on the planet, from cradle to grave, and
to be able to manage you effectively
(27:28):
through that profiling.
We see it has already happened in China.
And the ultimate end result of it, again, already observable in China and
some other countries, is digital control.
So I caution you against adopting it too much.
(27:48):
There are plenty of good resources out there for you to explore further to become more educated on this.
You could look up deepfakes, for example, and things
like central bank digital currency, and do some more research on your
own with regards to that.
From a business perspective, I would strongly encourage you to contact me,
(28:11):
give me a call if you do not already have a really sophisticated,
highly effective, profit-driving, risk-reducing program for operational maturity
in your business because that's what we do.