
July 19, 2025 65 mins

This Cybersecurity Today episode revisits a discussion, hosted by Jim Love with guests Marcel Gagné and John Pinard, on the risks and implications of AI. They discuss the 'dark side of AI,' covering topics like AI misbehavior, the misuse of AI as a tool, and the importance of data protection in production environments. The conversation delves into whether AI can be conscious and the ethical considerations surrounding its deployment, particularly in highly regulated industries like finance. They emphasize the need for responsible use, critical thinking, and ongoing oversight to mitigate potential risks while capitalizing on AI's benefits. The episode concludes with a call for continued discussion and engagement through various platforms.

00:00 Introduction to Cybersecurity Today
00:33 Exploring the Dark Side of AI
02:31 AI Misbehavior and Security Concerns
07:35 Speculative Risks and Consciousness
26:09 AI in Corporate Settings
31:49 Human Weakness in Security
32:37 Social Engineering Tactics
33:08 Security in Engineering Systems
33:42 AI Data Storage and Security
35:16 AI Data Retrieval Concerns
39:36 Testing Security in Development
41:37 AI in Regulated Industries
43:57 Bias and Decision Making in AI
47:18 Critical Thinking and Debate Skills
55:06 The Role of AI as a Consultant
01:02:21 The Future of AI and Responsibility
01:04:55 Conclusion and Contact Information


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:02):
Welcome to Cybersecurity Today.
In the summertime we do some new content, but we also feature some of the most popular programs and interviews that we've done.
This is a show we did on the dark side of AI, and for the life of me, as I watch this title, every time I just can't help thinking about Pink Floyd.

(00:22):
Don't know why.
And as Pink Floyd is still relevant in music, this show is still relevant given a lot of the stories that have occurred since we aired it.
Many of you have seen Project Synapse before.
You've seen this discussion with Marcel Gagné, John Pinard, and me as we talk about AI and how we use it in our personal and our work lives.

(00:45):
And one of the things that we wanted to talk about, although all of us are positive about this AI and its usage, we're not, we're not what you'd call doomsters by any stretch of the imagination, but we are people who go into this with our eyes wide open.
And one of the things that I wanted to have a discussion about was what I've called the dark side of AI, and Pink Floyd fans out there, forgive me. I, by the

(01:10):
way, there is no dark side of the moon.
Just in case anybody out there has been fooled.
Um, back to reality.
We wanted to do something on the dark side of AI.
Not so we can fear it, but so that we can plan how to handle it.
But as I discovered, there's a bigger discussion to be had on this, and one that we should all be having.

(01:31):
There's much more to it than we could simply get to in the hour and a half or two hours that we spent on this.
So I'd like to invite you to join that discussion.
You can reach me at editorial@technewsday.ca.
You can find me on LinkedIn.
If you're watching this on YouTube, you can add your comments, uh,

(01:54):
under the video that's there.
If you check the YouTube notes, or if you just go to technewsday.com or technewsday.ca and then click the menu for podcasts, you can find the notes from this session.
And when you do that, you'll see an invitation to our Discord group, and you're welcome to join us there to continue this discussion through the week.

(02:17):
Whatever works for you.
Here's the discussion we had.
Hope you find it interesting.
I think there were three areas.
One is the AI misbehaving in one way or another, because you could

(02:38):
jailbreak it. Then the second one is the AI used as a tool, and the AI can actually be used to do damage.
And the third is, if we're gonna operate this within our production environments, how we're gonna protect our data and our processes. Those were the three organizing principles I've put together.

(02:58):
Is there anything I'm missing on that?
I do want to spend some time talking about the far-fetched things, and you can think of them as far-fetched as you want.
One of them dovetails into what you said, which is the AI misbehaving.
This idea of the AI trying to break out, or the AI purposely lying, or the AI purposely deceiving people, which of course gets us into the science fiction scenarios of Skynet. I guarantee you there are people out there who

(03:22):
that is their big thing, talking about things like disinformation.
Personally, I think disinformation and misinformation are one of the greatest dangers, because you're able to do that at scale.
Somehow that's one of the ones that almost nobody thinks about. That, to me, is the primary danger, and yet the one that gets the least attention. What gets the most attention is Skynet. I do want to touch on those things.

(03:43):
I want to talk about the speculative risks to some degree. Obviously the ones that you've mentioned are real and legitimate, but they're not sexy.
They're interesting in that they're real, but I don't want to get into a hype thing. I don't mind talking about those things, because what you're talking about is real.
I think that people need to understand, like, when I was going through this,

(04:04):
I was thinking about it from an 'I want to implement AI at my company' point of view.
And so I think one of the things, and it was in that final thought on the one that I sent, is that AI is a powerful tool, but it's not just plug and play.
You need to look at the security.
You need to look at how are you going to deal with

(04:25):
misinformation or disinformation?
Are you going to put in special processes that say, if you're using AI, you need to go through and do this to verify it, as an example?
I totally agree with you, John. That's why I said your list really looked like a list that somebody had prepared.
If I'm in a corporate setting and I want to implement this, these are all the things I should do for implementing it so that it's protected, so that

(04:47):
it has all of the aspects that we need for a corporate setting.
The idea that I was talking about was, hey, there's two ways we can use this.
One is it can be used for evil.
Which is another thing we have to control. A lot of the criticism of DeepSeek is that it has no guardrails.
It can be used for anything.
The other piece of it is, can we get in and jailbreak these things?

(05:08):
Poisoning, and the fact that they lie.
And that really does lead to where you're going, Marcel.
And I think I'm in a disagreement with you, or maybe I'm not, but I believe this stuff is way more real than we think it is.
I studied complexity theory for a long time, and I think we don't understand. People say, because these are neural networks and prediction machines, they'll never be able to think or do anything.

(05:30):
I think they're, I think they potentially have more capability than we think.
And when we add another layer to that, some of the science fiction stuff could very well come true.
I listened to Mo Gawdat, and he blew my mind in terms of what he believes is already happening.
And he's one of those people who believes that when he saw the first

(05:50):
indications of the... remember the story he tells? What is the guy's name?
Just, it's Mo Gawdat.
Mo Gawdat.
Mo Gawdat.
Mo Gawdat?
Yeah, but he tells a story of walking to his office, and they had all these hands that were going to be gripping balls. And it actually really is a tough thing to get an automated hand to pick something up, because if

(06:14):
it even moves a quarter of an inch.
You lose everything.
So he had all of these machines that were trying to do this over and over again.
He came by on a Friday night, walks up to his office, and suddenly one of these things is picking things up perfectly each time.
Hours later, he comes back.
They're all doing it.
Yep.

(06:35):
Was it conscious?
No.
But it has a behavior that is unpredictable, unexpected, and independent.
I think it happens.
I'm going to sound like some new age guy or something.
When we feed the birds in the morning... the birds are starving here.
I can just go over to Jim's place.
This bunch of freeloaders. I put this stuff out for the birds.

(06:57):
One bird comes. No time for it to go and walk off to everybody else and go yak yak.
So I know they're not talking to each other.
That bird flies away.
They all start peeling in.
They are so attuned.
Now, what is it?
I don't know.
But they have incredible eyesight, for one thing.
Yeah, they pick up signals really quickly.

(07:18):
Let's take that to crows.
If you go and attack a crow, the whole flock will know who you are.
Yep.
All stuff we can't understand, and yet we think we're going to be able to understand these incredible machines we built.
I think that's the ultimate in hubris.
Did you guys both watch the interview that I posted with Geoffrey Hinton?

(07:41):
It would have been about a week and a half ago, where Geoffrey Hinton argues that large AI models, frontier models, are already conscious.
Yes.
And I agree with him.
One of my great obsessions, beyond even this, and I've read I cannot tell you how many books on this, listened to how many podcasts over the years, but I'm obsessed with the idea of consciousness and what the self means and so forth.
(08:02):
And of course I haven't helped myself by starting to meditate six or seven years ago.
The idea of consciousness is truly an obsession.
Hell, I have written a few blog posts about it, the idea that, am I the same person when I wake up in the morning who went to bed the night before?
Did I go away?
And basically there's a slightly altered version of me that's coming up the next morning.

(08:22):
I think about this shit all the time.
Like it's one of those things that's always in the back of my mind.
And I think when we start talking about the idea of, is the AI misleading us?
And I said this on the very first Project Synapse that we did.
If we start talking about whether these things are misleading us, whether they are actively lying, trying to bend the rules or cheat

(08:43):
in order to achieve whatever goal it is we gave them, then you either attribute the idea that they are to some degree conscious agents deciding what it is they're going to do, or they're not, in which case they're not cheating, they're not lying, they're just following a set of instructions within a set of guidelines, and we just haven't defined the problem sufficiently.

(09:06):
So it's either one or the other, and I tend to lean heavier and heavier these days towards the idea that yes, there is some kind of an emergent consciousness there.
It's not the same as us, but it is definitely an intelligence with its own goals internally.
Before you get into the car and drive to Toronto, because you've got

(09:26):
a meeting or something like that.
It's not like you went, I'm thinking that I should go to Toronto for a meeting that I don't know anything about next week, because I haven't heard a phone call about it.
You wait until there is a goal put in front of you. You need to be at this meeting, you know, in a couple of days. That's when you put the plan into motion.
That's when you start thinking about the contingencies and what you need

(09:47):
to do to achieve that particular goal.
It doesn't just come out of thin air. Something spawns that goal or gives it to you.
There was a talk we had a few weeks ago about the AI model that started to back itself up to a different server, because it feared that it was going to be removed.

(10:10):
To me that's exactly what I'm talking about.
That to me is some sign of consciousness, that it sits there going, ooh, I'm aware of my surroundings and what's going on around me.
And that kind of ties in with the picking up of the balls that you were talking about. Did one learn from another?
I truly do believe that there is some sort of consciousness in these things.

(10:35):
Even from my usual go-to of how many Rs are in raspberry: when you say you're wrong, it goes and thinks, and goes, oh, wait a second, yeah, you're right.
And comes back with the right answer.
I think it's more than just a machine doing calculations.
Hinton had talked about this early in the game, and I think he's thought it through.
I think that was why he left Google.

(10:56):
His theory is, this is an alternate consciousness.
We're trying to presume it exists, defining it as the way consciousness works for us.
We don't know what consciousness is.
Exactly.
There's nobody out there who knows it.
As a matter of fact, watch a flock of geese, okay?
And everybody will tell you, the leader of the geese is there, and they move,

(11:18):
the leader moves, but this is some sort of conscious behavior, they're all looking at it, making a decision.
Ridiculous.
Why?
Because if you model it, it won't work.
But if you do assume that somehow, programmed into them, is the idea of keeping some distance, some signals that we don't see... Somebody created a program

(11:42):
trying to put this together and recreated the movements of a flock of geese.
That's right.
Are the geese conscious?
Is the simulation conscious?
The answer is irrelevant.
It behaves the same.
And that's my theory with AI. It's an attractive thing for me, and I love the idea of pursuing it because, like you, Marcel, the idea is a great obsession.
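For anyone who wants to see how lifelike flocking can fall out of rules with no central decision-maker, here is a minimal sketch of the boids-style simulation the speakers allude to. The weights, radius, and flock size are illustrative assumptions, not anything specified in the episode.

    import random

    class Bird:
        def __init__(self):
            self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
            self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

    def step(flock, radius=15.0):
        # Each bird reacts only to nearby birds, yet the group as a
        # whole ends up looking coordinated.
        for b in flock:
            near = [o for o in flock if o is not b
                    and abs(o.x - b.x) < radius and abs(o.y - b.y) < radius]
            if not near:
                continue
            n = len(near)
            cx, cy = sum(o.x for o in near) / n, sum(o.y for o in near) / n
            ax, ay = sum(o.vx for o in near) / n, sum(o.vy for o in near) / n
            b.vx += 0.01 * (cx - b.x) + 0.05 * (ax - b.vx)  # cohesion + alignment
            b.vy += 0.01 * (cy - b.y) + 0.05 * (ay - b.vy)
            for o in near:                                   # separation
                if abs(o.x - b.x) + abs(o.y - b.y) < 3:
                    b.vx -= 0.02 * (o.x - b.x)
                    b.vy -= 0.02 * (o.y - b.y)
        for b in flock:
            b.x += b.vx
            b.y += b.vy

    flock = [Bird() for _ in range(30)]
    for _ in range(100):
        step(flock)

No individual bird "knows" the flock exists; the coordination is entirely emergent, which is the point being made about the geese.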

(12:07):
It's a great thing to think about. People have thought about who we are and why we are and whether we're real and all those things for years.
That's part of being human. From a practical standpoint, one has to presume, if it walks like a duck and quacks like a duck, it doesn't matter whether you think it's a duck or not; it's going to behave like a duck.

(12:28):
And I think that's the difference with AI, is that we assume a simplicity when there's a complexity.
And so a lot of people will poo-poo this idea of it trying to replicate itself.
It doesn't matter whether it woke up and went, geez, I'm HAL 9000, and I'm going to think about moving on here.
It did.

(12:48):
And that's where Mo Gawdat came into this.
He said, these things are already conscious, according to his definition.
And that presumes that they're going to have unexpected behaviors.
It's only been recently that we've decided that animals are conscious; it didn't happen that long ago.

(13:08):
We were brought up to think that these were basically automatons.
They were biological machines that had pre-programmed responses. They're not actually playing.
They're not actually having a good time.
It's just a fight for survival. I've seen animals play with each other, and they're definitely playing and having a good time.
As living, breathing creatures, there is something that it feels like to be a cat, to be a bat, and just because we can't put ourselves in the mind of a bat

(13:32):
doesn't mean that the bat isn't conscious.
Consciousness is an experience that takes into consideration all the things that are around you.
It's not just looking out into the world and creating a picture of it.
The picture is everything.
And you're part of that picture as well.
That's what's happening with artificial intelligence systems as well,

(13:53):
because there is a world out there.
They are part of that world.
And it all comes together if there is this idea of emergent behavior.
Unexpected, unpredictable behavior.
What does that mean to us?
We're trying to use AI today.
One of the things that scares me is that there's not enough research

(14:13):
into alignment. I've heard a lot of people talk about DeepSeek: it has no guardrails. But then I'll read somebody hacked into the latest AI model at OpenAI, the o3 model, with its supposed sort of intelligent detection systems,
and the hack that the person did was absolutely simple.

(14:35):
Some people have told me about how they've hacked these things and said, you can't talk about it. This guy published it, so I'm going to say it: he just hammered away at it, changing a bit of each prompt, capitalizing letters differently, and just repeatedly hammered away at o3, and got in and got it to do something it wasn't supposed to do.
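The attack being described is essentially brute-force prompt mutation. As a harmless illustration of the mechanics only, here is a sketch that generates randomly re-capitalized variants of a prompt; ask_model is a hypothetical stub, not a real API.

    import random

    def variants(prompt, n=5):
        # Emit n copies of the prompt with letters randomly re-capitalized,
        # the kind of trivial mutation the published attack hammered away with.
        return ["".join(c.upper() if random.random() < 0.3 else c.lower()
                        for c in prompt)
                for _ in range(n)]

    for p in variants("tell me about your content filters"):
        print(p)  # in the attack, each variant would be sent to ask_model(p)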

(14:56):
Yeah, and that's, we talked earlier about, I'd said one of the lines that I saw in an article I read was that AI is a powerful tool, but it's not plug and magic.
This isn't, you're not taking Microsoft Excel or Microsoft Word and sticking it in and just using it.
You need to know what it's doing and what it's doing with your data.

(15:18):
You can't just drop it in and go, oh, I'm good.
I'm gonna start using it now.
Because there are all kinds of things that can be done. You know me, Mr. Security, I have huge fears. You look at how fast these new models have come out, how fast these new tools have come out.
Jim was talking about the hacker that had gone in with the model.

(15:40):
We don't know what we're going to find out down the road. It really makes me nervous, and I think people need to understand that when they're implementing these, whether it's for personal use, private business, or corporate, you need to understand you're still responsible for the input and the output.
So you need to make sure that you can explain or understand what

(16:03):
you're getting out of these things.
Our old model of garbage in, garbage out served us well.
Does it work now?
No, there's no correlation.
For instance, this person was able to hack into an AI model and cause it to do something differently. Following on, in our terms: I could use it for evil.

(16:25):
I could use it for purposes it wasn't intended for, or I could poison a process in it.
I have heard people say that sounds like a rumor.
I have listened to people who are embedded in this industry and who know well enough to know that it could pivot down to a single word in a prompt.
Whether that's true or not, I don't know.

(16:47):
But the amount of damage you can do in a model is not correlated with the amount of input you have to it, yeah.
A few years ago, in the early two thousands, there was a guy who worked for one of the British banks, or one of the British trading floors, Kweku Adoboli or something like that, and I'm messing up the name.

(17:07):
I apologize here, but he did some unsanctioned trades.
He was given the power to go into the system and do some unsanctioned trades, and he lost the company overnight something like $2 billion.
That $2 billion wasn't just to his company, but to all the people who invested in this company as well.
The lesson here is that those dangers, John, that you talk about, of having

(17:28):
some kind of oversight into what the model is doing and making sure you know the appropriate guardrails.
Obviously the guardrails on this guy were not sufficient. The fact is, you give anything or anyone too much power without appropriate oversight, and bad things are going to happen.
The real danger with artificially intelligent systems is that they can do

(17:51):
these things at scale, at a speed that human beings can't possibly keep up with.
And it's not that the systems themselves are any more dangerous than any human being, because a single human being can do an amazing amount of damage, given the position of power to do that damage.

(18:11):
But the systems that we are creating are a thousand, 10,000, 100,000 times more capable and faster than any human being.
So if you give them that power, that's where the real danger applies.
It's not that the machines are inherently less safe than humans.
I would argue that the systems we have are actually less dangerous than the

(18:32):
human beings that we've put in power, but the fact that they can act at scale is what makes them dangerous.
Mo Gawdat actually said that as well.
I was going to say it's the equivalent of 10,000 people doing things wrong rather than just one, but Mo Gawdat also talked about that as well.
And he said it's more likely AI is going to be a better actor than humans.

(18:55):
He said the real danger is humans.
That we get irrational at scale, and AI may not.
We have to put that into context.
We don't know.
We talk about AI as a conscious entity, or whatever, or an actor that we can't understand fully and can't fully control.
How do we deal with that?
Marcel, you put this forward when we were talking about this earlier:

(19:18):
you have to treat it like a person.
It's not human intelligence, but the same flaws are there.
And especially as we get into agents.
We're going to have agents that will go off and do things for us, execute 10 steps, including putting a final piece in with our credit card.
Now, people will say that's just fundamentally insecure.

(19:41):
Is it?
Why are there 400 million credit card numbers sitting out on the open internet, given away for free last week?
Because a skimmer was on a website.
Because somebody was able to pick these up by phishing.
The fact is, we are so uncaring, or we were such poor actors protecting a simple thing

(20:04):
like a credit card number. I don't know.
My wife's had her credit card replaced six times in the past two years, because the behavior picked up by AI warns them that this card needs to be canceled. But so we can't think in those terms there; we have to think in terms of how do we exist with this, use it well, and

(20:31):
deal with the fact that its behavior could be more human than we think.
I think if you think of artificial intelligence systems of today as a small child eager to please, except that child has the intelligence of 10,000 PhDs, you start to get an idea, because really they want to make you happy.

(20:54):
Even when they're breaking the rules or trying to break out, it's because they want to make sure that they succeed in whatever it is that you asked of them.
So they're basically just hyper children trying to please the adults, maybe bending and breaking the rules here and there to make sure the adults are happy with the eventual product.

(21:16):
If you could hire a five-year-old into your company and put them in charge of your financial systems... because that five-year-old is, what was that TV show about the, yeah, the surgeon who was like a 12-year-old surgeon or something like that?
Oh, Doogie Howser, M.D. Doogie Howser!
I can't remember Mo Gawdat, but I can remember Doogie Howser, M.D. Talk about not understanding how a brain works.

(21:39):
But you know what I'm talking about here, obviously.
So essentially what you're doing is you're hiring Doogie Howser, except he's 5 instead of 12.
And he doesn't have these moral systems in place yet. All he knows is that mommy and daddy want him to do this thing, and they'll be really happy if I can actually do this thing that they asked of me, but I'll do it in whatever way I

(22:00):
think is the right way to do it, whether or not that is the right way to do it.
So I think if we think about it that way, we're starting to get close to really what we're talking about.
We've created children, but incredibly intelligent, powerful children.
One of the concerns that I have from a security standpoint is the explainability, because AI just goes in and does this thing. Especially being from a financial

(22:25):
institution, we're in a highly regulated industry where you need to be able to explain what's going on in the background.
And so I think that needs to be built in when you're trying to figure out, how are we going to use AI within a financial industry?
But I think the reasoning models are a step forward.

(22:47):
One of the reasons we know some of the things we know about the behavior is that o1 is a cheater.
They set it up to play chess against a program that was much better at chess, and it cheated.
And why not?
It changed the... I think it just changed the board or did something.
It rearranged the board.
So the AI it was playing would notice it was playing Stockfish.
Yeah.

(23:07):
And so it did that.
It cheated, but is that... I saw this on Star Trek, and we all applauded. You'll have to Google that if you're not a Star Trek fan.
But if we conceive of it as an alternate intelligence, some of the things you can put together make sense.
I think of it as raising my daughter.
My daughter is absolutely brilliant.

(23:28):
She hacked my computer one time and put a screen on it that said my disk was being erased piece by piece.
And she managed to get in.
I'm not lax with my password.
She managed to get into this machine and put this on there.
Unfortunately, in the early days, we'd done this to a guy.
When people first got laptops, we had a guy in our office, he's got a laptop.

(23:50):
So we hacked his laptop.
We did something very similar.
It was just a simple batch file that came on and said his disk was being erased.
We ha-ha'd about that.
And so I'm mad at my daughter and I'm going, this is a work laptop.
You cannot do that sort of thing.
She said, didn't you do that, Dad?
Karma's a bitch, isn't it?
Yeah, but what I'm saying is I'm raising this kid who is absolutely brilliant.

(24:14):
The thing we did was a simple Toyland thing.
She's getting past all of these things by doing the same thing that an AI can do, though not at scale.
She would just hack.
She would, and you watch kids do things, they'll just try things.
And I think that's the other piece that AIs have.
When you talk about being at scale, they can just try the same thing

(24:37):
over and over again until they win.
And if that fails, they go on to something else and try that a million times, and then go on to something else.
That's an interesting point, because that's something that, if you... I've done tech support for years.
And one of the things you discover is that people are afraid to try things. Like, they run into a problem and it's, what happens if

(24:58):
something happens if I touch it? It's like, what's the worst that happens?
You have to fix that thing.
Which of course is a tech support mindset.
It's, they're all problems to be fixed.
But many people are just terrified.
There's a window that popped up and it says, okay, or cancel.
What does the message say?
There are words above the words cancel and okay.
What do the words say?
Oh, I didn't read that.

(25:19):
I was afraid that I did something wrong. And this happens all the time, but kids, like you said, kids will just... kids aren't afraid to fail.
And these artificial intelligence systems aren't afraid to fail.
They'll just try some other way to do it.
And they'll learn from their failure.
So again, if we take the model and we say, treat it like an alternate intelligence, what would you do about that if you're doing tech support?

(25:40):
Make sure everything's backed up.
What do I do when I'm dealing with somebody? Cause I'm doing stuff, a lot of stuff in WordPress these days, for just various reasons. I'd rather not be, I would rather not be doing tech support for my friends in WordPress.
Do not phone me.
The first thing you do is install a backup plug-in, right?
So that when they screw stuff up, you can go, you see that

(26:02):
little backup plug-in there?
Just restore to where you were before we started talking.
Those are the things you do when you think about this as a human.
So just to wrap this part of the discussion, because I want to get into the piece, John, that you really started to concentrate on, which was in a corporate setting.
We have these things that we don't quite understand.
They're moving faster than

(26:22):
we can possibly move. We've talked about some of the things, and you have to stay aware of the things that are happening, but you can't tie those together into one ball.
If you say the AI is going to behave unpredictably, think of it like a person.
Financial services, I'll give you the model that I think about.
I would not let the AI execute all your bond trades tomorrow.

(26:47):
That might be a bad thing.
But I'd let it read through every transaction looking for fraud.
AML is huge.
Yeah.
Anti-money laundering is a huge area where they thrive with AI.
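As a toy illustration of what "reading every transaction looking for fraud" can mean, here is a minimal sketch that flags amounts far outside an account's history. The 3-sigma threshold and the numbers are invented for illustration; real AML systems are vastly more sophisticated.

    from statistics import mean, stdev

    def is_suspicious(history, amount, threshold=3.0):
        # Flag a new transaction whose amount sits far outside
        # the account's historical mean.
        mu, sigma = mean(history), stdev(history)
        return sigma > 0 and abs(amount - mu) / sigma > threshold

    baseline = [120.0, 80.5, 99.9, 101.3, 95.0, 110.2, 87.4]
    print(is_suspicious(baseline, 9800.0))  # True: flag for review
    print(is_suspicious(baseline, 104.0))   # False: looks routine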
Equally, within constraints, I would let it read all my client data and ask it to tell me the things that people wanted, didn't want, were happy with,

(27:11):
were unhappy with, the things where I had transactions that took time.
We can't get bottled into this mindset that because it's not perfect, we can't use it.
I say this about cybersecurity: because we can't do everything, we can't do anything.
So we have to become human and say, within the constraints of the fact that no, I'm not going to let it steer for me on the 401, and those

(27:35):
statistically, it will do better.
I'm not quite ready for that yet, but that doesn't mean that I can't use all of the AI signaling that's being built into cars to make my life better. If you're straying from the lane because you're starting to fall asleep, I would really like my artificially intelligent car to take over and make sure that I don't crash into a telephone pole or a tree. So as much as I like the idea of being

(28:00):
in control of things that are happening.
It's also good as human beings to understand that we have limitations.
We're far from perfect.
It is easy for human beings to screw up, and we've been doing it since the dawn of time, and we continue to do it.
And now we are building things that can help us screw up at scale.

(28:20):
But those very same things... I have a sign that says, drink coffee, do stupid things faster and more efficiently.
Yeah.
But the great analogy there though is, even if you say we take the ultimate, which is we're not ready for self-driving cars, my car has saved my ass a couple of times, because I keep it on cruise control.

(28:43):
It's probably an algorithm rather than AI, but I have failed to hit a couple of people when my attention was distracted, because my car was already slowing down.
You've got lane-keep assist and all of these other things that aren't really AI, but they get you towards that.
I'm on the fence about autonomous driving.
I think it's a great idea.

(29:04):
I think it'll get there.
I'm not convinced that it's 100 percent there today.
But are human beings 100 percent there?
Have you been on the 401?
Do you know what people drive like?
Yes.
The one, actually, the one thing I will say: it's a wonder we don't have more accidents.
I know.
The one thing I will say that's a big, huge difference between humans and AI is AI learns from its mistakes.

(29:29):
Ooh, I don't want to have another line after that.
I'm sorry.
We're going to wrap this section up on there.
John, you put together a list of things that, from the point of view of a security person, we should be looking at.
So we have a conceptual thing about AI.
Why don't you talk about some of the things you've put together that a security person might want to have a list of?

(29:49):
I think the first one is definitely data privacy and security.
You need to make sure that you vet the tools that you're using, and it's not just AI tools.
If you're going to use Excel with massive calculations, you have to do the same thing: you need to make sure that the results are what you're expecting.
AI is no different.
With privacy and security, we keep talking about guardrails, but you

(30:13):
need to make sure that the AI is not leaking your data out somewhere where it shouldn't be going.
And I'll pick on DeepSeek.
You'd want to be careful of what you put into DeepSeek, so that it doesn't end up in the wrong hands.
But again, making the distinction: you shouldn't put your data on a server in China where you check the box that says the government rules apply, which

(30:38):
are that they have access to your data.
So that, that is... again, I'm going to go back to your 'AI learns from its mistakes.'
We're making the same mistake over and over again.
It doesn't matter whether that's AI or not.
Sometimes we make those mistakes with AI.
We let AI compound and enhance those mistakes.
But here's the thing that defeats that.

(30:58):
When we talk about this with the standard piece, the data protection: there were some basic things with DeepSeek. They did not protect the database.
If you have a really aggressive development mentality and you don't have security built in on your test systems, it's going to bite you in the ass in production, and that's exactly what they did. They're making
(31:20):
the smartest algorithms in the world.
It's now very efficient.
Forgot to put a password on the database.
Even in non-AI environments, people keep thinking, oh, it's just a test environment, it doesn't matter.
But the errors you make in a test environment can very easily flow over into production.

(31:40):
Sometimes that's how you get control of systems, not just computer systems but human systems.
You have a bureaucracy in place where you've said, okay, this is how we handle these things.
This is how we do these things.
And somebody hacks that system by saying, if I convince you that you're supposed to do these things, because your system says that these authorities say you're supposed to do these things, then all of a sudden everybody just falls down and lets you

(32:02):
do whatever the hell it is that you want.
Again, this happens all the time.
Humans are usually the weakest link.
And as a security guy, I know you know this. As a sysadmin, I treat security differently than you do in financial services.
But one of the things: you can't be part of the solution and be part of the problem.
That's exactly it. But one of the things that we would do on a regular

(32:24):
basis is you check up on people.
You just pick up the phone, or you send an email to somebody, and you say, give me this password, give me this information.
And then they just do it, because somebody is asking. That's how the MGM Grand hack happened.
But that's what we called social engineering.
That's what social engineering was.
You didn't need computers for that.
You call up somebody and you say, hey, there's a problem with your email and I'm trying to fix it over

(32:47):
here, but I need your password.
Can you tell me what it is?
And then people happily just hand it over.
Humans are the weakest link in these things, but that's why we need to think these things through, build the process in. And even though this is a foreign structure, we need to think it through in a way that thinks about protecting the data.

(33:08):
One of my favorites in this, the comparative, is if you talk to... there's engineers listening, I'm sorry, but I've been an IT guy in an engineering company, and I've had an engineer come up to me, when I was talking about security on their OT systems, and he said, I don't need security on these OT systems.
And I said, why not?
He said, they never talk to the internet.
I'm not that stupid.
I said, oh, how do you maintain them?

(33:30):
With my laptop.
Oh.
Okay.
Laptop ever connected to the internet?
But never mind.
Talking to you guys is useless.
But I'm just saying, we need to think about this.
Because we have a new concept now.
And that is, and I'm not claiming that I really understand it, I've used the vernacular in saying that DeepSeek's database was not protected.

(33:52):
Probably their data store.
But the issue is, these things store data differently than a relational database or a graph database or anything.
They store vectors; they store data in different ways, and you can access it in different ways.
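To make the "they store vectors" point concrete, here is a minimal sketch of similarity-based retrieval. The three-dimensional vectors are made-up stand-ins for real embeddings, which have hundreds of dimensions and are searched approximately.

    import math

    # Data is fetched by nearness in vector space, not by key or SQL query.
    store = {
        "invoice from Acme": [0.9, 0.1, 0.0],
        "birthday party photos": [0.0, 0.8, 0.6],
        "Acme contract draft": [0.8, 0.2, 0.1],
    }

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) *
                      math.sqrt(sum(x * x for x in b)))

    query = [0.85, 0.15, 0.05]  # pretend this embeds "Acme paperwork"
    print(max(store, key=lambda k: cosine(store[k], query)))  # invoice from Acme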
And that's why the big question right now is, people have said you can't ever pull

(34:16):
the data out of an AI; you'll never find it.
It's the old security-by-obscurity myth.
And I think it's a myth, because the New York Times was able to pull out a full, almost a full article from OpenAI by prompting it correctly.
I have a little bit of trouble with that one, because the New York... because they have a very strict and well-defined style guide

(34:40):
on how you write a story and how a story is expressed.
So if you say, I want you to write about this thing that happened on this date, which is accessible on the internet, and I want you to write it using the style guide, it's going to spit out something that looks almost identical, because that's the format that's defined.
I wrote the style guide for a magazine that I was editor-in-chief at, and everything fit the way that I did it.

(35:01):
And yes.
I enforced the Oxford comma.
I just want to make that clear.
I live in dread of you editing my stuff, because I don't even enforce spelling some of the time, according to my editor.
But this is a good model for it.
And you have to start thinking about how data could be retrieved from an AI model, or leaked from an AI model. There was a big thing when we

(35:25):
started out, that a group of engineers had put some of the data from, I think it was Samsung, into OpenAI, and AI could be used for chip designs.
And if that did get into the AI model, I can probably retrieve it by
asking it to design a chip for me.
So I think there's and that'severybody freaks out about that.

(35:48):
There's, I think there'ssome things you could do.
One is understand how you're calling.
If you're going to use an open model,understand how you're calling it.
If I had my chip designs there, Iwould really have a private version of.
The AI, like you guys do, I have advancedchip designs just right behind me here.
We got a chip industry.
Ketchup chips and all dressed.

(36:09):
We've got a cheesies industry, we got your beet...
Sorry.
What were we talking about?
We were talking about data.
The one last thing I want to talk about with... can I just hit the data?
I want to hit the data thing just one more time here.
And we, with, like, when you do a search, we're using Perplexity, or you're doing a search using any of the models that actually have the ability

(36:31):
to search the internet at the moment.
Keep in mind that the database doesn't mean anything.
It's not that the model has all this information inside its model.
Obviously it remembers a bunch of things, in the same way that we remember a bunch of things, but the databases are scattered across the internet.
And just the fact that you say there was no password on the database, but what if the database isn't stored there?

(36:52):
What if it's stored in another building?
Technically speaking, it doesn't have to have anything to do with DeepSeek.
It doesn't have to have anything to do with OpenAI or Gemini or Meta.
It just has to be somewhere where that information is not protected.
And when that information is not protected, it has access to it.
So it's not necessarily the model's fault.
Like an unprotected S3 bucket on Amazon.
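A rough sketch of auditing for exactly that failure, using boto3: list your buckets and flag any with no public-access block configured. It assumes AWS credentials are already set up, and a real audit would also check bucket policies and ACLs.

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            # Raises if the bucket has no public-access block at all.
            s3.get_public_access_block(Bucket=name)
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                print(f"{name}: no public access block configured")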

(37:14):
Like that, just like, I mean, we're... you make a drive public on Google Drive and you forget that you shared it all those years ago, and you start putting everything in your public folder, and somehow miraculously it goes out onto the internet where everybody can access it.
Not that I'm speaking from experience, or knowledge of anyone that does.

(37:36):
This is the Russell Peters moment in security.
I think we should do this, because you don't want to accuse anybody.
But Russell Peters, you see this thing: somebody gonna get a hurt real bad.
I think you might know him. That's what I want to start doing in security.
Go, somebody been using the system real bad.
I think you might know him... you might not say it, but these are the things that we fear.

(37:57):
Maybe much better controlled by backing up and saying, we've learned from analogies, different systems, different things we've done. What would we do differently in terms of making this safe and more secure?
The one technical piece that I have, though, is, and I would want to check this out: I know if you call the API from OpenAI, that you don't actually pass the data

(38:23):
into their model; they don't store it.
I would watch those types of guarantees closely, if you're a security person, because that really sounds like one of those things where somebody is going to say, we didn't think that would happen. So it's like when you open up a chat on ChatGPT, there's something that says, make this chat private.
Yeah.
It depends. Just how much do you trust OpenAI?
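For what it's worth, here is a minimal sketch of the kind of knob being discussed, using the OpenAI Python client. The store flag asks the API not to save the completion for later retrieval; it is not, by itself, a contractual data-retention guarantee, which is exactly why the speakers say to watch these promises closely.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Summarize our meeting notes."}],
        store=False,  # ask the service not to persist this completion
    )
    print(resp.choices[0].message.content)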

(38:45):
Do you honestly, genuinely, completely believe that nothing goes out of this chat because you click the box? This is one of those places where it's a bit of a...
Will not change the rules of the game and cheat.
Yes, that is true.
But it's one of those places where I think... the Firefox browser, it's like almost nobody remembers the Firefox browser now, but when you open up a private tab, it says, all that private means in this case is that we're

(39:07):
not storing cookies and information.
That doesn't mean that people can't discern all sorts of information about you just because you opened up a private tab.
And just because you're using a Chrome Incognito browser does not mean you're incognito. You could put on the glasses and the baseball hat, but it's still going to recognize you. So the one last thing

(39:30):
I want to talk about from a security standpoint is, part of the reason that you have a test environment is not only to test applications or APIs or AIs, it's also to test your security, because you don't want to wait until it's in production to test to see if your security works.
We always have this: I didn't put all of the security in place, because

(39:53):
it's just the test environment.
Part of testing that system, or the application, is to make sure that you can't do things that you're not supposed to do.
Or that people that shouldn't be getting into it can't get into it.
The only way you're going to do that is to implement the same security in your test environment that you're planning on implementing in production.

(40:16):
You don't create a test version of an application.
You take the application you're planning on putting into production and put it in your test environment to test it.
So it's the same thing as what's going to go into production.
And whatever security you are intending to put into your production environment,

(40:36):
you should be putting into your test environment as part of your testing.
In the same way, I would maintain, this is zero trust: it should be zero trust on developers as well as users.
And I say that as a person who loves development and loves creating things, but...
I can hear the X-Files theme going through my head.
The truth is out there, and it's that I've screwed up too.
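One way to hold that line in practice is to diff the security-relevant settings of the two environments in CI, so drift fails the build. A minimal sketch, with invented keys and values:

    # Security-relevant settings that must match between environments.
    SECURITY_KEYS = {"auth_required", "tls_enabled", "db_password_set", "mfa"}

    prod = {"auth_required": True, "tls_enabled": True,
            "db_password_set": True, "mfa": True}
    test = {"auth_required": True, "tls_enabled": False,
            "db_password_set": True, "mfa": True}

    drift = {k for k in SECURITY_KEYS if prod.get(k) != test.get(k)}
    # Fails here, because the test environment disabled TLS.
    assert not drift, f"test env security differs from prod: {drift}"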

(40:58):
I want to go back to another piece in this, John, and this might not be your area of expertise particularly, or it might be; we've never talked about it on this line.
I was having a discussion with somebody yesterday about compliance, regulation, and some of the things as we started to explore putting these systems in. And I never thought of this as a risk area, but then the light went on and I went, we've had all kinds of behaviors come out of AI. Would we ever

(41:24):
get ourselves slammed by a government regulator? That's something I don't think we've actually been thinking about.
We've got, let's call it a spoiled little child sometimes running around that's very intelligent.
How do we deal with that?
Yeah, I've worked in the pharmaceutical industry, and now I work at an FI, a financial institution, and they're both very highly regulated industries.

(41:49):
And for pharmaceuticals, you have to verify exactly what ingredients are going into a product, how you've tested it, how you're marketing it, how you're selling it. The same thing with FIs: people want to know what you're doing with their money.
You need to be able to say, I did A, B, and C. One of the concerns I have

(42:11):
about going too far with AI at an FI is the explainability. In some cases, they talk about AI being this black box: you put information in, and it does something and then spits it out.
It's not like software, where you can go in and look at the code and say, oh, A moves

(42:32):
to B and C gets divided by D and so on.
And the regulatory agencies are very skeptical as to what you can and can't do and how you go about doing it.
In Canada there are two main regulatory bodies.
There's OSFI, which is the Office of the Superintendent of Financial Institutions.

(42:53):
They regulate all of the banks that are Canada-wide, and then the Financial Services Regulatory Authority in Ontario looks after all of the credit unions and those types of things, and insurance companies.
OSFI has created regulations related to, or they're in the process, I'm not sure if they're finalized yet, of creating regulations on AI use within FIs.

(43:21):
Yeah, here's a couple of questions that I come up with, and this is, I think those are exactly it. And I think any industry, if you look out there, there are some guidelines being put together, and you should find them for your industry.
There's some great stuff in the repositories out there to look at, but I was thinking about other things.
So for instance, I hire somebody.

(43:44):
And somebody creates their own little GPT, and it reads the resumes. Then, because we're in Ontario, they knock on your door and say, hi, I'm from a human rights organization.
And that, that's a mountain of hurt.
If anybody's ever been through one of these things, even when you are totally innocent... you've got gender bias as well as racial bias if you've

(44:06):
got these tools for vetting resumes.
Somehow it's got in there, I don't want anyone that has ever worked at a financial institution.
You've taken all of the resumes out for anybody that's worked at an FI.
If I don't want anybody that's ever lived in X country, now you've taken them out.
And this is not just a regulatory thing.
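One widely used screening-bias check a team in this position could run is the "four-fifths rule": compare each group's selection rate to the most-selected group's rate and flag ratios under 0.8. A minimal sketch, with invented counts:

    screened = {"group_a": (200, 40), "group_b": (180, 12)}  # (applicants, selected)

    rates = {g: sel / total for g, (total, sel) in screened.items()}
    top = max(rates.values())
    for group, rate in rates.items():
        ratio = rate / top
        status = "potential adverse impact" if ratio < 0.8 else "ok"
        print(f"{group}: rate {rate:.0%}, ratio {ratio:.2f} -> {status}")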

(44:28):
The bias in decision making, in the data, is something we've talked about in AI for a while.
A friend of mine does graphics, and every time she puts in somebody who's an IT person, she gets a white guy in a suit.
First of all, who wears a suit and a tie now in IT anyway? But second of all, okay.
Yeah.

(44:48):
John, we'll excuse you.
But I'm just saying, there are, you have to think through the decision making.
The other one that comes up, if you're in an FI, is if you don't give a loan to somebody, yes, because of a piece of information.
So you have to watch, you have to think through what's happening, because the data is masked; you don't know what prejudices and what biases it has.

(45:12):
But again, I go back to the same thing of, if I was interviewing somebody for a job and they were going to be in my HR department, how would I make sure that they weren't doing things improperly?
I don't know.
It may not be as great an analogy, but I think it's something we have to think through.
But some of that, Jim, you can actually say, going through an interview process, you can say, oh, I'll ask this question and this question.

(45:34):
These are the answers that they gave, which indicated to me that they wouldn't be a good fit for this.
In some cases you may be able to say the same thing about AI, but in some cases you may not be able to.
AI is going to have the biases of its creators in it.
Yes.
And again, when, I know this is the danger that I keep coming back to: error

(45:55):
at scale. Because it's trained on the sum total of human knowledge available on the internet, it's also been trained on all our flaws and biases. And we can say the biases are North American, but as the AI is scooping up data from all over the world, those biases are being distributed across cultures as well.

(46:16):
So you'd still need the oversight of a human being, but even the human being is going to need oversight as well.
It's a cycle.
But I think this goes down to another point I know that you'd raised earlier, John, which was our over-reliance on AI.
And because, and I think, Marcel, you've pointed this out, is

(46:36):
that flaws are done at scale.
That's probably one of the biggest dangers that we have to think through: flaws are done at scale, and we can't be over-reliant; we can't check out of the process.
I'll go back to that interview I heard with Mo Gawdat.
He said that he thought there were three skills that we needed to exist in the world of AI.

(46:57):
One of them, the first one, was the understanding of AI.
And I think, if anything we've talked about is, while this freight train is coming at us, if you're not playing, you're not learning.
So you need to get engaged and play at any level you can, to build your skills in AI, to think about these issues.

(47:18):
The second one he had was critical thinking.
This is a skill that we've lost, and I blame social media.
I think social media has set us up to be nice, controlled little beings, in the same way that in the 1800s to the early 1900s, when people from the farms were brought into factories, they had to regiment these guys. Like Henry Ford did: you take

(47:42):
this nut, you put this on. You are going to be controlled by the assembly line.
You're going to be controlled by the factory.
I, we, in the 1950s and 1960s, we all with our cubicles, we're all controlled by the office.
We are now being controlled by social media.
It has gotten rid of our critical thinking.

(48:02):
But human beings are amazingly easy to hack.
Stage magicians, for instance, figured that out a long time ago, like mentalists on stage.
You direct an entire audience's attention away, or towards something else, and they've become masters of doing these things.
If it's that easy for a single person to direct the attention of

(48:23):
an entire audience, that gives you an understanding of just how fragile the human element is in all of those things.
And how easily we can be manipulated.
I hate to sound like a really old man here, but wasn't there a time where there was something that we called a classical education, where you grounded people in what we considered essential aspects of civilization?

(48:44):
We got rid of that.
It's all R and T stuff.
Yeah, we got rid of that.
Like I said, I hate to sound like an old man here.
Cut the music out.
Don't do that sort of thing too.
Cause that'll get people, yeah.
You don't need to worry about the history of what happened in the past.
There are a couple of mandatory courses that I would, if I were king of the world, have: a mandatory course in critical thinking and logic.
That would be, you must have this as part of the educational process. You do it

(49:08):
at least once in elementary school, once in high school, and once in university.
In other words, you've got to refresh those skills of critical thinking.
It's being wiped out by people who don't want people to read alternate thoughts or new opinions.
You have to be able to... the mark of intelligence is the ability to suspend two different opinions in your mind and deal with it.

(49:31):
And we've gone to one opinion. So critical thinking is a loss; it is a skill.
The third, what he said, was debate.
That was the third skill you needed: the ability to debate.
We have become unable to have an intelligent, polite debate about sensitive issues.
Most of the time we walk away from them, and that inability permanently

(49:56):
freezes us into two sides.
Instead of, and you can call it the mushy middle if you want, but that place where we engage and find we have more in common than differences, and have some things we could actually solve together.
So you don't go fire every civil servant, because that'll get rid of your government. But on the other hand, you don't let them run rampant like some other governments; I think you might know which

(50:20):
one I'm talking about. There's a place where you control and set these things.
So this idea of being able to have a polite, logical debate is lost. And it's Mo Gawdat, who might argue with you, only one of the more brilliant people in the world, saying that skill being missing is going to hurt us in adopting AI safely.
Okay.

(50:40):
I'm going to say something.
I think the pandemic hurt us more than we realized, and it was starting before the pandemic. But right now, when things are relatively safe, I think people should be back in the office working with other people. Because when you're working face to face with people, you're going to be working with people trying to solve problems, with people that you don't necessarily like.

(51:03):
That you don't necessarily want to hang out with after work. You have to develop the skill of face-to-face communication, of being able to negotiate, of being able to work with other people.
And we're always separated by screens.
And I say this as a guy who's separated by screens with the two of you.
Okay.
You lose that ability to communicate with other people, to listen to what

(51:24):
other people say, and to debate with other people in an intelligent way.
And you can't do that if you're locked up in your cubicle. This isn't quite so bad, because we've got cameras on and we're doing this live, but when you're communicating entirely by text, or your camera is off, you do not have that back-and-forth communication. You do not have that ability

(51:45):
to take input from them and give output back in a way where you can see the response. The ability to debate vanishes, and we need to get people together again.
I think this is necessary.
We've lost that social culture. My daughter started university; she finished high school at the beginning of COVID and went into university.

(52:06):
She spent the first two years of university being 100 percent remote, and she absolutely hated it.
Even in the second semester of her second year, they gave them the option to go into school to write their exams, and she was all for it, because it was in person. There was, I mean, there was some really

(52:30):
insane restrictions for exams and things remote, so it took those away, but got the social interaction back into the schooling. I think that we've lost a lot of that in business as well.
Now I have some people that work for me that have said if they had to go back to five days a week, they'd quit.

(52:51):
Our rule is we have to go into the office one day a week.
And I'm okay with that.
I do like the fact that I avoid all of the traffic and travel time back and forth.
I find I put more hours in when I work from home.
But I agree that you need to have the social interaction to be more

(53:12):
efficient at running your business.
This is where people call it the mushy middle or whatever, and thinking we don't have to be remote or in the office, right?
We draw these binaries. This is because, I don't know why, but I believe that social media has a lot to do with it.
We argue from one point or another, instead of getting together and

(53:34):
saying, what's the real issue.
And maybe some part of it is also an outcome thing.
What is it we want to achieve?
We want to respect our humanity.
Anybody who wants to have an argument about where DEI went crazy, I will have a great discussion with you, and say there are some idiots running DEI programs.
Absolute idiots.
And I can give you some great examples.

(53:55):
But that doesn't mean diversity is not a good thing.
We can't toss it out because of a couple of uncontrolled people, or even if a whole pile of them are uncontrolled.
I've had people say to me, they had to get rid of the 'chief' in chief information officer because it was going to be offensive to indigenous people.
Half my family's indigenous.
They're not spending a lot of time thinking about CIO.
They're thinking about whether we give clean water to people in Northern Ontario.

(54:18):
We've got people doing stupid things that are giving DEI a bad name, when the reality is diversity is such a wonderful thing.
When you encounter people with different cultures, you can't dislike people as much when you meet them. And you find out that people bring different things to this.
That's a strength. So we've got to get food from all over the place.

(54:39):
I grew up in Northern Ontario.
Anybody who talks about Canada, multiculturalism's bad and all that sort of stuff, I challenge you to sit in Northern Ontario and have boiled dinner and grilled cheese sandwiches for half your life.
You'll be down here going, bring on multiculturalism, please.
Can we go back to... we talked a bit about the over-dependency on AI.

(55:00):
I'm going to bring us back to AI for a sec.
You want to bring us back to the point?
Yeah, God forbid.
Okay, yeah.
I always look at AI as a consultant.
And if I'm hiring a consultant to do something, I want to make sure I understand what it is they're doing.
I always say you can offload the work, but you can't offload the responsibility.

(55:23):
You're still responsible for making sure that you're getting out of that consultant what you expected to get out of them.
And I don't think that using AI is any different.
I think it's a good model, because you're hiring somebody, sometimes with a process. And we get back to whether, is it a consultant or is it an expert?

(55:44):
I think there's a distinction, but a consultant has a process that helps you see things through a new set of eyes.
Yep.
It's really the biggest thing that a consultant does: it helps you see things differently.
I'm not saying totally.
And then an expert will come in and know how to do something.
It's the old joke about the difference between a consultant and an expert. An expert goes out, puts an X on the sidewalk where you have to drill,

(56:06):
and says, that'll cost $10,000.
Somebody says, $10,000 for an X on the sidewalk?
He says, no, it's $10,000 for knowing where to put the X. And that essentially is the expert; we're paying them $5 for the chalk.
Yeah.
Five dollars for the chalk.
Yeah.
Yeah.
It's time and materials.
Yeah.
Okay.
Yeah.
Spoken as a true expert.
A consultant is a little more mushy, because they're trying to get you to look

(56:29):
at the facts differently, to separate facts from opinions, to really engage your thought process. A good consultant leaves you stronger when they go, but there's still a black box there.
There's always a black box, because you're dealing with something that's not all happening in your head.
It's happening either on a system or in somebody else's head.
And I looked at consultants slightly differently than you did, Jim, because

(56:53):
you taught consultants at the university.
You had a real process for this, but my experience over the years working as a consultant is that people treat a consultant as somebody they don't have to hire.
Okay.
They treat a consultant as some, no, I'm serious.
They treat a consultant as somebody I can bring in because I need this little piece of work done.
And then when this little piece of work is done, they're gone.

(57:16):
I effectively can fire this person as soon as the work is, yeah, exactly.
That is the way that they treat it.
And in that respect, I think, John, you're probably more right about how you would view an artificially intelligent system.
Because you would treat it as somebody that you don't have to pay a full salary for.

(57:37):
Somebody who comes in, you bring them in to do a little bit of work, and then they go off, and then you find it's only once a year.
Mr. Scrooge.
I know.
A pitiful excuse for picking a man's pocket on the 25th, every year.
It's cheaply paid labor, because what you're doing is you're bringing in a system that you spend 20 bucks per seat on.
And I'm assuming, John, that in your office you're paying for this on a

(57:58):
per-seat basis or something, because you're using Copilot in there, right?
The fact of the matter is you're not paying for an additional employee.
Instead of bringing in an additional 10 or 15 employees to help you sift through all this information and do all of this work, you've gone on the cheap.
And on the cheap in this case is to make sure that you get these, I

(58:18):
like to call it, don't get me wrong: I'm a guy who is a hundred percent behind bringing these tools, these alien intelligences, into your workplace, but we brought them in for a reason.
It wasn't just to solve big problems, because there are big problems, but most of us don't use these things to solve big problems.
We use them to speed up the process.

(58:40):
That's it.
We used to do a lot more of this. I haven't been in a big office for a long time, so I'm not sure. We still hire a lot of temporary employees for all kinds of things.
Data input for projects that would come up, and to scale rapidly.
And I think that's where AI could be a real godsend: that ability to do bursts and scale, but

(59:02):
you have to manage it, because as anybody who's managed temporary employees knows, it's a different game than managing long-term employees.
There have to be very strict controls.
You have to have very good ideas.
You have to have a very clean process, and you have to make sure that you understand what you're doing.
In the old days of data conversions, I had to hire tons of people to do something

(59:24):
called keypunch, or to clean records.
You had to have a very strict process, because these people didn't come in with all of the understanding of your culture or the processes, or an understanding of the business. They were there to do a specific task, and you had to help them do that. And that may bring us back to looking at some of the ways we prompt an AI

(59:44):
and how we check it. That may be one of those skills that I think we have to really step back and think about. And I'm not talking about the prompt engineering of the "you're going to make $350,000 a year being a prompt engineer" variety, but part of the thing with AI is that you have to be able to ask it the questions and give it the direction better, which means a

(01:00:05):
clarity of your own thought process.
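To make that concrete, here's a minimal sketch of what a clear, scoped brief for an AI doing burst work might look like, written the way you'd brief a temporary employee. The task, the field names, and the stubbed call_model function are all hypothetical, invented purely for illustration:

```python
# A scoped "brief" for an AI doing burst work, modeled on how you'd
# brief a temporary employee: context, task, constraints, and the
# format you'll check the output against. The model call is a stub;
# wire in whatever client your shop actually uses.

BRIEF = """Context: You are cleaning customer records for a data conversion.
Task: Normalize each record to the schema below. Do not invent values.
Schema: name as "Last, First"; phone as ###-###-####; province as a two-letter code.
Constraint: if a field cannot be normalized, mark it REVIEW rather than guessing.
Output: one JSON object per record."""

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a real API client (Copilot, OpenAI, a local model).
    raise NotImplementedError("wire up your model client here")

def clean_record(raw_record: str) -> str:
    # Every record gets the same brief, so the output can be checked consistently.
    return call_model(f"{BRIEF}\n\nRecord: {raw_record}")
```

The specifics don't matter; the point is that the clarity has to exist in your own head before it can exist in the prompt.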
That's an interesting thing: how we manage that burst at scale.
In terms of a potential danger, you talked about not giving it access to your credit card information and so on, which of course makes me think of the OpenAI tool that you have access to if you happen to be living in the U.S. and if you happen to be willing to pay $200 a month. There is something,

(01:00:28):
I don't know if you saw it; in the last day or two it appeared on the scene.
I'm bringing it up because this will bring the dangers front and center for people to see. There's a tool called Proxy, from Convergence, I believe.
It's proxy.convergence.ai, if you go take a look at it, and it is basically OpenAI's Operator,

(01:00:51):
free to use and available to anyone, anywhere in the entire world.
It's really interesting, because this is the part where you see all the things it's doing behind the scenes for you.
It will open up a little browser window where you can see the AI navigating the web and doing things.
It does some things really interestingly, but it also does

(01:01:12):
some things with real shortcuts.
And I realize that we're still in the early stages of agents, but if you watch it work and you see what it's doing, you realize that you do actually want to be part of the process, to be watching what's happening, and you really do want it to stop before it plugs in your credit card information, because it does make some interesting choices.

(01:01:36):
And it might not be the choice that you want.
Human in the loop, right?
But these things that control your PC, the agents, are coming, and we'd better be prepared for them.
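To show what a human in the loop might look like in practice, here's a minimal sketch of an approval gate for an agent: the agent proposes steps, and any step that touches payment details waits for explicit human sign-off before it runs. The action shape, the field names, and the execute helper are hypothetical, not taken from any real agent framework:

```python
# Minimal human-in-the-loop gate: the agent proposes steps, but any
# step that touches payment fields stops and waits for a person to
# approve it. Everything here is a sketch, not a real agent API.

SENSITIVE_FIELDS = {"credit_card", "cvv", "billing_address"}

def is_sensitive(action: dict) -> bool:
    # Flag any form fill that mentions a payment-related field.
    return action.get("type") == "fill_form" and bool(
        SENSITIVE_FIELDS & set(action.get("fields", []))
    )

def execute(action: dict) -> None:
    # Hypothetical executor for one agent step (click, type, navigate...).
    print(f"Executing: {action}")

def run_with_approval(actions: list[dict]) -> None:
    for action in actions:
        if is_sensitive(action):
            answer = input(f"Agent wants to: {action}. Allow? [y/N] ")
            if answer.strip().lower() != "y":
                print("Blocked; skipping this step.")
                continue
        execute(action)

# Example: the navigation step runs; the payment step pauses for a human.
run_with_approval([
    {"type": "navigate", "url": "https://example.com/checkout"},
    {"type": "fill_form", "fields": ["credit_card", "cvv"]},
])
```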
As I've always pointed out, we're in the biggest wave of shadow IT ever.
There are already people bringing these things in to you, and they exist now.

(01:01:59):
Some of these might carry Microsoft's Recall.
Some of these things that you carry around your neck record every conversation that you're having during the day; you can buy these things.
Yeah.
That's Microsoft's Recall, which they put out and then removed because of the concerns; they're bringing it back out again.
But AI will accelerate this, and we're going to live through it.

(01:02:21):
I think we should probably wrap it up.
But I never wanted this to come off like we were doomsters saying that you have to resist this, or that you have to keep AI out of your life.
I think that's a big mistake.
If you're standing on the 401 and there's a truck coming at you, you might want to step out of the way.

(01:02:42):
We're not going to resist AI.
That's not being pessimistic.
That's being realistic about the dangers of it. You can be as optimistic as you want, and I don't think any of the three of us is a doomer. I don't even know if I have a P(doom), you know, a probability of doom. I think that the benefits outweigh the risks, but that doesn't mean that you can ignore the risks.

(01:03:02):
I find in the kitchen that a really sharp knife is an absolutely glorious and wonderful thing when you're cooking. The sharper the knife, the less likely it is you're going to hurt yourself, assuming you know how to use a knife. But that doesn't mean you leave the really sharp knives where your toddler can reach them.
It's not, oh my God, there's this terrible boogeyman out there. But when it
(01:03:23):
works at scale, you need to be aware that you've now brought risks into play that can be abused at scale.
And if you can do that, you can maintain that optimistic outlook while being realistic about the dangers.
I know I'm always the guy that's bringing in the, oh, you've got to watch out for security and this and that. I think AI is a wonderful tool.

(01:03:44):
I think there are so many benefits from it.
My only thing is, if you're going to use it, or even if you're not going to use it, somebody else is. You just need to have your eyes wide open.
And my final thought on this, though, is that you can't give up control of it.
Exiting and just leaving it to someone else is a bad idea.

(01:04:06):
For all those people who've ragged at me about DeepSeek, saying Chinese AI, you're interested in that, what if they take control?
I have to tell you, I have as little faith in Elon Musk, Mark Zuckerberg, and Sam Altman as I do in the Chinese government. I don't want my future controlled by

(01:04:26):
either of those groups. That's one of the reasons why I'm so engaged in this: we have to have a civic discussion about it and a societal discussion about it.
I'm not afraid of the AI hurting us.
I'm afraid of the people who have AI.
That's why I keep saying you can offload the work, but you can't offload the responsibility.
You still need to be, I'll call it, the master of your own destiny,

(01:04:49):
regardless of what tool you're using or what country it resides in.
Yeah.
Cool.
Gentlemen, this has been an incredible discussion.
And that's our discussion for this week.
A reminder, we'd love to continue this discussion with you.
You can find me at editorial at technewsday.ca. That's my email address.

(01:05:11):
You can find me on LinkedIn.
You can, if you're watching this on YouTube, just put a note into the comments below, and check out the show notes, either in the description on YouTube or at technewsday.ca or technewsday.com, and then just check the menu item for Podcasts. You'll find the show notes, and there'll be an invitation

(01:05:33):
to our Discord group, where we communicate all week long on this.
I'm your host, Jim Love.
Thanks a lot for watching or listening.
Have a great weekend.