Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Hi, and welcome to Project Synapse, our weekly show where John Pinard, Marcel Gagne and I get together to discuss what's happening in AI. And we feature a lot of these shows on, uh, Hashtag Trending on the weekend.
This week, uh, there's just no other way to start the show.
I normally get everybody together and we do an intro, but Marcel
(00:23):
started on a fascinating topic, and we're gonna join it right in the middle of that.
Okay.
Where the hell am I?
What, who am I, what am I doing?
It's
funny.
I have that, I have, I ask those same questions every day.
(00:44):
Who, where, who am I?
Where am I?
What am I doing?
Oh, yes, I remember.
And does anybody
answer?
Usually me, which frightens me a little bit.
Oh, yeah, that's, yeah.
That could be scary.
Yeah.
And especially if the voice sounds like it's coming from outside your head.
Are you familiar with the concept of the bicameral mind?
(01:05):
I know the term, but I haven't, Okay.
I have a blog.
I have a blog post on that topic.
And interestingly enough, I tie it into the concept of artificial intelligence, which to me is fascinating given that the theory of the bicameral mind goes back about 3000 years ago.
If you go back to the writings of, actually the theory is modern, but the idea goes back 3000 years, because about 3000 years ago, if you
(01:28):
go back and take a look at earlier writings, people always talked as though there was a second voice in their heads, as though there was actually a second entity.
They thought about it as a god, as an angel, as a whatever, but it was a separate, defined entity that would tell you what to do or tell you what not to do.
Not like a conscience, but an actual honest to God voice that spoke to you.
(01:51):
And about 3000 years ago, over the course of a couple of centuries, all of that faded from the writing.
And people would talk about talking to yourself as opposed to an actual separate entity in your head.
It's almost as if something happened to our minds to shut off that second intelligence, so to speak, that co-intelligence that we lived with.
(02:12):
And really?
Yeah, and it's throughout the world.
Like you can look at writings throughout the world.
It happened about, that shift, that schism, the bicameral mind.
Yeah, bicameral mind.
I have a, here, let me find, and you
could actually be your own best friend.
Except that this wasn't really a best friend in the sense that they were actually, they were, it was your best friend because he was always there, but it was like, it was really, or sorry, it was always with
(02:35):
you rather, is what I should say.
Yeah,
it's funny because we had, when we had, when Christine was young, she used to have a friend called Kymers who was in her ear, and I went, oh, Christ, we've got, here it comes, the early onslaught of schizophrenia, and but apparently that's quite normal for kids.
But, and then the friend just goes away.
(02:57):
And
apparently adults used to have that friend all the time.
That's interesting.
As in, yeah.
So we all had it and it stuck with us throughout our entire lives.
And somewhere, like I said, about 3000 years ago in the literature, it slowly, over the course of a couple of centuries, starts to fade away completely.
I pasted the link where I wrote about it.
And the reason I called it the new bicameral minds, my bicameral minds, is because in a world
(03:21):
where everybody is starting to develop their own personal co-intelligence, and you have to remember, if you look at any of the literature on how chatbots are being used, you know this, Jim, I know 'cause you've done a segment on it.
Something like 40% of what people use ChatGPT for, as in four out of every 10 people, use it as a chat companion, as possibly a romantic
(03:43):
companion, a friend or a therapist.
Okay.
So they're not using it to answer questions or do research.
They're using it as a confidant, as a best friend.
And I was theorizing that what we're doing is we're creating the new bicameral mind, the AI that's always in your ear, that's always ready to talk to you. Yeah.
And answer questions and so on.
Interesting.
Yeah.
(04:04):
Reading a Robert Sawyer book right now, Hominids.
'Cause I'm gonna meet him.
I'm at this, I'm at a book fair with him.
He goes to the more famous bar.
He gets to speak for an hour.
I get to speak for 20 minutes.
So that tells you, what, the book?
Rob's my buddy.
Yeah, Rob's my buddy.
He's like my brother from another mother.
Really?
The first book.
The first book Hominids.
(04:24):
You've got Hominids.
Yeah.
Is that the one you're reading?
Yeah.
Okay.
Yeah.
Look at the dedication on the first page.
Oh, I got an audiobook, so I didn't, for the first time.
Ah the book is dedicated
to Sally and I.
Oh, interesting.
So that's cool.
Yes.
So anyway, so Sawyer's got this idea of this.
The, I don't wanna spoil it for people, but the main character in the book, that
(04:49):
turns up, is a Neanderthal.
Oh, you gave it away now.
There you go.
Yeah,
it's, John's not gonna remember this.
John doesn't have that kind of story memory.
Come on.
Okay.
There you go.
So anyway, but we have, don't forget, I'm the youngest of the three of us.
We have at least three people who listen to us too.
There's, I actually think it's more than that, but
(05:10):
that, given that the book said,but that's a surprise to me book.
We're that little voice
for thousands of people on a Saturday morning.
We're that little voice in their ear.
That's scary.
That
little voice at your ear head out anyway.
Party.
Yeah.
Don't abuse it personally.
Anyway, but he has this concept of an AI that follows you around everywhere.
(05:32):
And this is from, what, 2014?
Maybe. This book, I don't know.
It's early.
Could be earlier than that.
But he's already got this idea of an AI that, that is, follows the characters around everywhere.
Interesting.
And I think he calls it the alibi, which I won't give away.
Yeah.
It's the alibi.
He wears it on
and you wear it on your wrist.
Yeah.
You have one that you can hear in your head and you wear
(05:53):
this thing on your wrist and Yeah.
But they have no crime because they, because everything you do is seen, and so it's hard to get away with anything.
It's a really interesting book.
I'm glad, I'm glad I picked it up.
So there you go.
And I, then I'm going, I think in Minden, I'm on, what I was
(06:14):
thinking, I think it's on the 12th.
I'm doing a book fair with him.
Like I said, I'm actually a little thrilled about it.
Cool.
Yeah.
So, tell him,
tell him I said hi.
I will.
Yeah.
If he talks to me, I don't know if they, I don't know if the guys who really sell come down and talk to the lowlife science fiction writers, I'm afraid.
For the record, and I'm not just saying this 'cause he is my friend, but Rob is like one of the most personable guys you'll ever meet, oh, good.
(06:36):
Like he is a, he's an international bestselling author.
He's won more awards in science fiction and fantasy than almost anybody else in the world, including all of the three major awards, including the, yep.
The Hugo, the Nebula.
He's like right up there.
But that's the trifecta.
But Rob talks to everybody and he's a friendly guy, and like I said, he is one of the most personable people I've met.
And, good, from a professional perspective, he's highly approachable
(06:58):
and he'll chat with anybody.
He has me as a friend, so obviously he does chat with anybody.
There you go.
I can, yeah.
Why am I afraid?
How could I possibly offend Robert Sawyer after he's met you?
I'm sure I've offended him more than once, but
I'm doing a talk on AI in writing at it, as well.
(07:19):
Oh, that should be
explosive.
I'd love to be the fly on the wall for that one.
Should,
that should set me up really nicely from the start, right?
Yeah.
Yeah.
I'm not allowed to discuss
those things at parties, by the way, just so you know.
Yeah,
I, that's probably why they got me on before noon.
'Cause they wanna make sure that nobody's had a drink first, cool.
Let's get started.
If we haven't, we're already started.
We're not started yet.
We haven't started yet.
(07:39):
This is Project Synapse.
In case you've come in through the middle of all of this.
This is our Friday morning AI talk.
I'm from Project Synapse and I haven't had an AI in a week.
No, that's not that sort of AI.
It's AI, yeah, that's AI.
That's a different, I'm in a different, solve that, yeah.
You got Rubik's Cube there.
That's good.
Okay, why do you have a Rubik's Cube sitting there?
(08:00):
So I can pretend that I've already solved it.
Oh, he took all the stickers off and put them back on.
I, that's what I did. The first time a friend of mine gave me this Rubik's Cube and said, you'll never solve it.
And I went and I tried for 20 minutes, and then I got bored and I just peeled all the stickers off, put them back on, handed it back to him in under an hour, and he went, what?
(08:22):
Okay, so what's happened?
Nothing happened this week.
You went away last week, Marcel, so we did have two weeks to catch up on.
I did.
I was
Newfoundland by the way?
No, it was Nova Scotia.
I went to Nova Scotia.
Oh, Nova Scotia.
You didn't go to Newfoundland.
Oh,
okay.
I went to Newfoundland last year, but this year I went to Nova Scotia.
So I've, now I've done all the provinces.
The only, I haven't done any of the territories yet, but Nova
(08:44):
Scotia absolutely loved it.
And in fact, Halifax was one of those cities that, honestly, I couldn't believe how fantastic it was.
And it was one of those where I could actually imagine myself living there, because it, it's such a glorious, walkable, friendly, well-designed city.
I love their thinking about it.
Yeah.
It's gonna be what Toronto was, a livable city,
(09:09):
yeah.
Yeah.
So I loved it.
We went to a bunch of other places.
We went to Baddeck, we took a little drive up the Cabot Trail, went actually as far out as a place called Louisbourg,
which is a, oh yeah, the fort.
Oh, it's amazing.
Actually, I couldn't believe how much.
I love that.
It is, it's exquisite is the right word.
They do it so
well.
Oh, wonderful.
(09:29):
Wonderful.
Seriously, Canadians who are listening to this show, you have to go out to Nova Scotia and check out Louisbourg. Yeah.
Oddly
enough, Marcel, my son was out in Halifax last week on business and really, all he can talk about now is, he's in the trades, and all he can talk about now is that he wants to uproot and move to Halifax.
(09:50):
And I get it work out there.
I get it.
I really
do.
If I had to live in a city in Canada, it would probably be Halifax.
I would probably move to Halifax rather than, if I had to go back to living in a city, which I don't.
This commercial was brought to you by the government of Nova Scotia.
Yeah.
Yeah.
There you go.
So let's talk about AI.
(10:10):
Oh, just 'cause that's, oh, yeah, that's what people showed up for.
Okay.
So, who wants to start?
There's just so much happening last week.
I can't believe how many things got launched.
How many different
I, I'm gonna make a suggestion because you guys really, I know, wanna rub my nose in this.
Okay.
No.
And let's just get it the hell out of the way.
(10:31):
Right off the bat.
Apple and Perplexity.
All right.
Just get it out of the way so I can stop hearing about, oh, damn Apple.
What?
What happened, Marcel, could you tell us about it?
We wouldn't plan anything like that.
What happened with Apple and Perplexity?
I don't want to talk
about it.
You guys can introduce it and then I'll tell you what's wrong with it.
John's crueler than I was.
(10:52):
He said we should make the whole show theme about that.
We'll get it outta the way.
I just wanna get it outta the way.
The Apple, the rumor is, and there have been meetings, that Apple's going to try and buy or partner with Perplexity, which is a really good idea for many reasons.
One is, Apple sucks at AI, and I think it's time to admit that.
(11:17):
And you know something, given some of the other things we'll talk about, they're not going to assemble the A team on AI.
It's not going to happen.
So they better buy something and buy something quick.
And Apple has never been the big buyer.
Google's done that, I think, and others have done that.
But the big buy, for the billion dollar type of thing, has not been their style.
(11:40):
They bought stuff, but they generally buy it earlier, and that's been a, yep.
That's been a thing with Apple.
They're really not this, go out and spend a hundred million, billion dollars on a company and hope that it's gonna turn into a $105 billion company, or, thing.
But they've always really been the frugal ones.
And this is relatively speaking, by the way.
(12:02):
This is way more money than I'll ever have that they're spending on these things.
But so they go, and they were gonna make an offer for Perplexity, everyone says. But it's not confirmed yet, but there have been meetings, they've talked about it, and we know that from leaks that have been out.
And Apple doesn't leak as a company generally, so if this got out,
(12:25):
it's either because they wanted to or it's the big slip this time.
Now I've heard people say things like, nobody leaks.
There's never really a leak, like all leaks are planned.
They're designed to, like the leaks are designed, because otherwise you try to keep it to a very tight, deep, and you wouldn't just suddenly
(12:46):
everybody's talking about it.
Just for the record though, Perplexity is denying, like somebody inside Perplexity, and I don't remember who, is denying that this conversation is even going on.
So there's a lot of buzz around it.
Like a hell of a lot of buzz, but who knows?
Yeah, I, and that's interesting.
Because I think one of the conversations was out in the
(13:08):
public record that they'd had it.
But it doesn't matter whether Apple's talked to Perplexity or not.
Apple would have been having those internal discussions.
And I agree with you.
There are, I think some of the times we think that there are leaks and they're not, it's called sales and marketing.
Yeah.
So they put it out there so that they can get this together.
(13:30):
Now, I don't know.
Is perplexity public?
Somebody ask somebody.
Ask Perplexity.
That's a good question.
I know they have.
Somebody should ask.
Somebody should ask ChatGPT as opposed to Perplexity.
I would ask, why would you not ask Perplexity?
Geez.
You could, it could be bought by Apple soon anyway.
Might as well bring them into the family.
Is Perplexity public?
(13:50):
Yeah.
It is not a public company.
It's still privately held.
Meaning it doesn't trade on any public stock exchange like Nasdaq.
However, its valuation is roughly $14 billion.
There you go.
Okay.
So they're expecting not to go public before 2028, apparently,
is what they're saying.
Yeah,
because if they went public, they could, Apple could do a takeover. This way,
(14:13):
They have to actually negotiate.
Yeah, that is a good point.
And Bezos has money in there, so I assume he has shares.
It's the golden rule.
If you shove enough gold in front of somebody, you get to rule.
So that's right.
We'll find out.
But that, but it does point out how far Apple has fallen behind on AI, and that's something that I just find astonishing, that they've let that happen.
(14:35):
But that's, and yeah, the reason why they, oh, sorry, you were gonna say.
No,
it's okay.
I was just gonna say, they just, they keep talking about their next AI or the, updates to Siri and this and that, but it just keeps getting punted out and punted out.
The famous, I don't know whether they're running into
problems or what the issue is.
(14:55):
I think part of the problem with Apple is its architecture, and it's, its desired architecture.
They wanna be the most secure thing on the block.
And so they're trying to do everything within the, they also
control everything.
Yes,
absolutely.
And that's been very difficult for them.
But I don't understand it.
The smaller AI models are out there.
(15:16):
They're open source.
There's no reason why, they can't be, they couldn't be into this, but
there's no reason why they can't be bigger at this.
There are reasons why they can't be bigger at this, and one of them is gonna be attracting the team that they need to do this.
And that's becoming more and more impossible.
(15:36):
'Cause Zuckerberg went out last week, and I don't know if everybody, I think, and we had a lot of announcements, and the interesting thing about the announcements is that most of what was announced in terms of AI models was in the terms of it being a disappointment.
Meta's last release apparently was meh, and Zuckerberg freaked out and said, I'm
(15:58):
gonna, I'm gonna offer a hundred million dollars to anybody who joins me, as a starting bonus, any senior AI person,
apparently.
So the story goes, yes.
Wow.
The signing bonus Now Altman,I'll go sign up, is pointing out.
Yeah I would too, but Al it's pleasetake, take me, I'd do it for 50 Now,
Altman is people I would point out, yeah.
That people
would point out to,to, to z to Zuckerberg.
(16:19):
That if he does listen to theprogram, and I'm sure he does that,
that we would, we could negotiate.
Yes, Mark, buddy.
Come here.
Mark.
It's me.
Marcel the mark.
I'm here.
Call me.
Call me.
We'd even do it for Canadian dollars.
That's right.
Anyway, Altman
Altman
is claiming that his people have been approached, but they're,
(16:43):
they're turning down the hundred million dollar signing bonus.
Four of them went rather not.
Yeah.
Oh, four of them went.
Really?
Yep.
Just past 24 hours.
He lost.
Not
the best people though.
Really?
Not the best people.
Oh, yeah.
He lost a senior one in California, and three in Europe.
Bummer.
That's, yeah.
So that's, I caught that this morning.
So it looks like they've moved, because they've, through the ones
(17:05):
in Europe, have said, have given it away by saying, we're going. But the ones in Europe, I love them.
They started out with Google's DeepMind.
They went to OpenAI, now they're going to Meta.
So these guys are gonna be billionaires by the time they're done, on signing, before they even
do anything.
Yeah.
Yeah.
It's like somebody who starts working for a Canadian insurance company, they just move
(17:27):
from one insurance company to the other.
Yep.
Yeah.
Yep.
Because that's how you rise, because you made a comment about that.
It's, there's, the announcements and so on this week have been disappointing and things, and I guess my comment to that would be, compared to what?
Compared to all the insanity that's come up previously,
thank you.
Two
years ago, what some of the announcements that came out this week would've
(17:51):
been phenomenal, but now we look at it going, eh, that's old hat.
No, I think it's
a legitimate feeling that the models have plateaued for a week or two.
And Meta seems to have come up with one that didn't really get much traction.
There's a couple more that really didn't move much.
Even DeepSeek has held back their release of R2, and they're holding
(18:16):
it back because they're disappointed in the results, and part of that is, okay, so
they're still choked for Nvidia chips.
They've gotta get some more.
Okay.
I just wanna point out though that we are, we're actually at a point, and we talk about, agents is the big thing, but we're at a point now where a point release or another release of a particular large language model doesn't mean a whole hell
(18:38):
of a lot when you've got something that is so incredibly capable already.
Just how far do you want to push the limits of the, of the model
capabilities, especially at a time when, and I'm trying to remember who it was, somebody did an interview just this week who was effectively saying that we already have AGI, and we already have AGI because we have to stop thinking
(18:59):
in terms of a single monolithic model.
AI models are now able to bring in outside tools using things like MCP, which we talked about, but able to bring in outside tools so that effectively you've got a larger model that controls a whole bunch of little tiny models, that control a whole bunch of little modules out on the internet, that can communicate with other agents and sub-agents and so forth.
(19:21):
So as a result, we already have AGI.
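For listeners who want to picture the setup being described, here is a minimal sketch, not any vendor's real API or the actual MCP protocol: a coordinating "large model" that hands sub-tasks to registered outside tools. The tool names and the routing rule are invented purely for illustration.

```python
# Toy illustration of a coordinator model delegating work to outside tools,
# in the spirit of an MCP-style arrangement. Names and routing are made up.
from typing import Callable, Dict

def web_search(query: str) -> str:
    # Stand-in for an outside tool the model could call over a tool protocol.
    return f"[search results for: {query}]"

def calculator(expression: str) -> str:
    # Another stand-in tool; empty builtins keep this eval harmless here.
    return str(eval(expression, {"__builtins__": {}}))

TOOLS: Dict[str, Callable[[str], str]] = {
    "search": web_search,
    "calc": calculator,
}

def coordinator(task: str) -> str:
    """Pretend 'larger model' that routes sub-tasks to smaller tools."""
    if task.startswith("calc:"):
        return TOOLS["calc"](task.split(":", 1)[1])
    return TOOLS["search"](task)

if __name__ == "__main__":
    print(coordinator("calc:2 + 2"))          # -> 4
    print(coordinator("latest AI releases"))  # -> [search results for: ...]
```

The point of the sketch is only the shape of the argument above: the intelligence sits in the coordinator, and capability comes from what it can reach, not from any single monolithic model.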
So it's just, obviously not evenlydistributed and people are still trying
to figure out how to make it work, but the tools are already in place, and the argument that he's making, and God, I wish I could remember who, who said this?
Hinton was it Hinton?
This week because he, no, it
wasn't Hinton.
Hinton did have a big interview this week, but it wasn't Hinton.
(19:42):
Lemme see if I can figure it out while you, because, yeah,
I know what you're talking
about.
Somebody did say we're already at AGI, and it was one of the big names, and I can't remember who, but Hinton came out this week.
He did a speech at the University of Toronto and basically, I, if I'm gonna
paraphrase it, he said it's over.
Oh yeah.
We're, AI is better and better at reasoning, and it, and humanity will be replaced
(20:05):
was basically, I, and I don't think that's, I don't think that's hyperbole.
I think he actually really came out and said that we're, that if you were gonna have an argument with an AI, it would win the argument.
And, and so I think that's a big piece that Hinton came up with.
And he, curiously enough, he was doing it with his student, Nick Frosst from Cohere.
And Cohere is a Canadian AI company.
(20:28):
One of the, one, not, I won't say the only, 'cause there's lots of entrepreneurial companies, but Cohere is one of the companies that is mentioned. When you start listing what the United States government is doing with AI, the word Cohere, or the firm name Cohere, turns up all the time, and it's because they're out of Toronto and they're a big consultancy.
(20:51):
And so Nick Frosst is one of the founders, and he had this discussion with Hinton, and he's much more optimistic. Frosst thinks that AI systems, now this is, if you're coming in and you're the optimist, being the optimist for saying jobs are gonna remain.
A lot of jobs are gonna remain.
(21:11):
Frosst is the optimistic one who says, oh, only 20 to 30% of the job of everyone who sits in front of a computer will be gone.
Yeah.
Because 20 to 30% of now is really minor.
So that's the best case scenario.
And Hinton's saying, no, no, 80% before, before you can.
Before long.
But one of the things
(21:31):
that I wonder is that, the tools have jumped or, jumped
forward leaps and bounds.
You've got early adopters that have jumped on the bandwagon, but what's the adoption going to be in, I'll call it in the next, I don't know, 12 to 18 months?
(21:52):
I've heard things, like from Salesforce, that Agentforce is not doing as well as they would like it to, because of slow adoption.
I've, I was actually speaking, or I'd heard recently from some other FIs, financial institutions, that have said that they're actually not permitting their staff to use AI at this point.
their staff to use AI at this point.
(22:14):
And I think it's, the tools are great.
I think that there's all kinds of things out there.
I know from my standpoint, I'd kinda like to let some of the dust
settle a little bit to figure out.
And when I say that, there's, everybody and their brother is now coming out
with either models or tools and so on.
(22:34):
Some of them are gonna fall by the way of the dodo bird.
Some of them are gonna get acquired.
And I think at the end of the day, you may potentially have four or five large companies that have a majority of these things.
And, the, I really like Claude, but one of the things for me is we're a Microsoft
(22:58):
shop, so how do I integrate Claude into the Microsoft environment? If they've, if they create integrations, that's great.
And I think that a lot of these companies are going to have to do that to be able to survive, to connect into the Microsoft world, the Google world.
(23:19):
So that, because I think at least initially there's gonna be a lot of people using it as a personal assistant, which means it needs to be able to interact with their tools, like Office and the Google suite.
Okay, so
let me direct you to, let me direct you to another story that
(23:41):
happened this week, which ties into what you're suggesting here.
Okay.
And at the moment, the two big players, and let's be clear, the two big players in this arena, if anybody is going to win, assuming of course that's even possible, to win in this AI world, the two big players are OpenAI and Google.
(24:01):
And I challenge anyone to argue with me that there's anybody else that's on that level in terms of what exists and what's possible out there.
And OpenAI this week released some tools, some early tools, which suggest that they're going directly after Office 365 and Google's Workspace.
In other words, docs and spreadsheets and all that sort of stuff.
(24:24):
And they have the, they are building a suite of tools where the, the artificial intelligence is more like, and I'm, again I'm having trouble.
Who's the ex-CEO of, of Netscape?
Way back when?
Help me out here.
Big angel investor, controversial guy.
I'm very outspoken.
(24:47):
Put thi put this manifesto acouple of years ago about what the
future was supposed to look like.
Come on.
I am really having trouble withthis morning anyway, 'cause I
can see his face and I'm trying,having trouble with his name.
But anyhow, he, the, he compared modern AI systems, like what OpenAI has or what Google has, not as a program or an operating system, but more like a new computer, like
(25:12):
a new microchip or a new computer.
And as he puts it, everyone uses computers.
Computers are everywhere.
Computers basically run our entire world.
And what we're doing is we're shifting to a new paradigm where the computer doesn't matter.
What matters is the intelligence that runs everything.
Because if you think of it, the microchip was the intelligence.
It was the intelligence on which everything ran.
(25:33):
AI is replacing the microchip as the intelligence on which everything runs.
And OpenAI is currently developing, like I said, a suite of apps.
And if you think about it, they did that a little bit with Canvas.
Remember when they introduced Canvas?
And you could, like, and yes, and I know that Anthropic did it first, but they introduced Canvas.
And Canvas was this window that you could open aside where you could edit
(25:54):
things and see changes and so on.
It's a very small step from that to a full suite of office tools.
And of course, with more and more people going to ChatGPT or Google's Gemini
as opposed to classic search engines.
It's not a big stretch to go from there to, we are ruling the entire
world with one megalithic AI system.
(26:18):
This is where Apple and others are in trouble.
Yeah, because if you, as long as you stayed within the concept that the phone was everything, we're gonna, everything's gonna be on the phone, Apple was going to keep its leadership, that there would still be the whole Android piece that was out there, that there would be competition and things like that.
But Apple was going to reign supreme.
(26:39):
In the same way, as long as you kept documents and all that sort of stuff, Microsoft was pretty sharp at reigning supreme, and you don't have to be better, you just have to be the one that everybody knows.
Yep.
And because, to face it, like I, I went to, and a lot of, a lot of the stuff happening in Europe, people are talking about moving off of Microsoft software.
(27:01):
Hooray for them.
I think they should, I think Microsoft should get a big punch in the
nose for what they did last week.
But the, but I've been using open source.
I just looked at it, went, I'm not gonna even pay them.
Sorry, I'm not gonna pay them the X dollars a month.
What is it, 10 bucks a month?
I don't care.
I'm not paying them for a word processor or PowerPoint.
'cause there's just too much out there.
But, so that's, but still Microsoft will reign supreme if, as long as
(27:26):
that's the motif that people are using.
Even Google, which has an equally good suite of products, can't really drive
into the corporate office that much.
They tried but they're really not there.
But if you abolish both of those.
And you change the whole way we structure the whole motif, which
(27:47):
is, why I think, Sam Altman is pretty bright in some aspects.
Or Jony Ive is, one of the two.
But it's, don't create another phone, guy, 'cause you'll just go head to head with Apple, and, create something new.
That's where you really can make a big difference.
And I think that's going to happen.
And
that's, even with the existing phones, I think if you can tie other apps,
(28:14):
other AI apps or other AI tools, in as an app on those phones, like quite frankly, I don't care if I'm using Siri or ChatGPT or Claude or Perplexity, as long as it works on my phone.
But that's, all you win is the app war.
Yep.
(28:34):
These, you're now in the $10million the Hir agreed game.
Agreed.
If you want to dominate, you have to change the interface.
And I think that's the op, that's where you get the opportunity.
Remember, it's, in, in financial services, they used to say if you had a, if you were with a bank in Canada, especially in Canada, you were at that bank from the time you started work to the time you died.
Yes.
There's only a couple of points where you're gonna make a change.
(28:56):
And this is, many products are like this, that, that lock you in for life, and, yes.
And so when are you gonna make, you're gonna make a change at a life event?
In technology products you don't make a change because it, because Android's got something, Apple will get it soon.
And if you're an Android user, yeah.
I'm not gonna change to Apple for one function.
(29:16):
'cause Android will get it soon.
AI is exactly the same way.
Agree.
'Cause as much as we talked about, we're talking about, oh, this week's releases are meh, or somebody's ahead this week, that, somebody's gonna have it next week.
There's no competitive differentiation, with that marvelous thing of being able
to say, I can block the competitors out.
(29:37):
But if you change the whole way of doing it, that's when you
start to have that leadership.
What that leadership or what the leader, first of all, the caffeine has finally kicked in.
The person that I was trying to think of, I said ex-CEO of Netscape.
Marc Andreessen.
Yes.
Yes.
So it, the, like I said, thank you Coke Zero for kicking in the caffeine I was on,
Oh.
(29:57):
Anyway, this, you guys.
Anyway, he, the way the quote this week was, AI is not the application.
AI is the processor, and the processor is the heart of every technolo, a technological innovation and control system, and whatever that you can think of on the planet.
So if AI replaces the processor as the primary way that, you know, that
(30:20):
work, that information, that play, that games, that everything is given to us, then that is an amazing change, in, in the way that we view things.
And remember, the other thing that, right now, as the relationship between Microsoft and OpenAI has always been, how do we put it?
Complicated.
Okay.
It's a complic.
It's more
complicated this week, I'll tell you.
(30:41):
Yeah.
But it's a complicated relationship.
And what, and Microsoft is having a hell of a time getting people to sign on to Copilot, like a hell of a time.
Because what everybody wants isn't Copilot.
Despite the fact that Copilot runs on ChatGPT, basically, people don't want Copilot, they want ChatGPT.
Microsoft, don't worry.
Microsoft could screw that up.
(31:02):
And they did.
I don't know how they could, but they're not.
I know how they're
not gonna, yeah, they're not gonna, they're not.
Yeah.
But they have, because, yeah, Microsoft Copilot is available to anybody and everybody. However, you know that I'm in a Microsoft world and so forth, but if you want to actually use Copilot for anything, I'll say, that's useful.
(31:25):
Even as simple as using it with Microsoft Teams to do minutes
and action items outta a meeting.
Yeah, you have to pay $42 a month Canadian per user to get the, I forget what they call it, the Pro license or whatever, the stuff you want, I think is what you call it.
The integration with the Office suite, Word, Excel, PowerPoint.
(31:48):
You have to have that extra level of Copilot to be able to do that.
Yeah, but I'm sorry, but they inify everything.
Like, it's, how you can even negotiate your way around a Microsoft Windows world is so 1990s, when you remember when we always used to know how to do everything, and that was what, 'cause I knew how to do it, so I really knew how
(32:11):
to operate this stuff, that, that's so old.
You should be able to grab any piece of software and begin to use it without any training, without a, without having to figure out where they store stuff.
And that's what Mac, or Apple, promised, and they've departed from that.
You, there's a big learning curve from that.
Don't, I don't
think anybody really ever delivered that.
(32:32):
Jim.
Oh, chat.
ChatGPT has,
AI has, ah,
yeah.
Yeah.
Okay.
That's my point.
That's my point.
That's what I mean.
We, it's only now thatthis is happening's.
What, that's what they didn't get.
Like I go on to perplexity chat,GPT or code, and I ask them, how
do I do this guy if I need to know?
Yeah.
Most of the time I don't need to know,would you like me to do that for you?
(32:56):
And that's the new interface.
Now, what form factor that takesis going to be interesting because.
Right now everybody's onto voiceand that's a pretty good deal.
We know that OpenAI is notgonna come up with a phone.
A lot of people are thinking, the andthe lawsuit that, that's coming up right
now with with ChatGPT and just as a basisfor that, for anybody who's listening
(33:19):
ChatGPT has a partnership with Jony Ive to develop something that is not a phone, which is going to be the next groundbreaking thing, that, that where they take over the world, whatever it is now.
And they, Jony Ive, is it Ives, Jony Ive's company?
It's Ive's company.
Ive, Jony Ive apostrophe S's company is called io.
(33:43):
io, which is either a tribute to the Seven Dwarfs, io, it's off to work we go, or it's input output, or it's something clever, but I digress.
Or owned by the bank io.
Yeah.
Or owned by the io.
That's good.
Yeah, that's probably more, more in keeping with me.
But the, but in the midst of this, earlier, a company called IO,
(34:08):
spelled IYO, had approached Altman and said, we've got a device.
It, you, it goes into your ear and it hooks on.
It was quite large actually.
But that was, that's what they were going to have for their device.
And can we partner with you, or would you give us $10 million
to continue developing it?
And Altman, even at that time, knew that he was down the
(34:29):
road with something similar.
And so he said, no, I'm not gonna do that.
And that, the reason we know about that is 'cause there's a bit of a lawsuit, when they put io on the OpenAI site.
These guys from IYO lodged a complaint.
'Cause if you can't, if you can't get people to invest in you, you sue them.
That's the Elon Musk way.
(34:50):
He does that with customers.
Sorry.
Yeah.
We're gonna get letters now.
Every time I say something about Musk, I get all these, this attack mail, and I always think, okay,
ah, poor guy, how dare you attack him all the time?
Leave him alone.
He doesn't deserve any of it.
At least Jim.
People are listening.
There you go.
Yep.
Yep.
So anyway, back to this idea, which is, if you recast the interface
(35:16):
for AI, then you have a chance for a new company to be built, as Google did to search, as Apple did to the phone, and all of those things.
People would say that BlackBerry first did it to the phone, but, but the idea is you can take over a market space if you go in and do that, if you come
(35:37):
up with a radically new departure.
And I think that's what we're looking at.
I, and we'll see it very shortly.
I'm absolutely convinced. I don't, and I, it may be the first couple of attempts may not work, but we've already had some of those, the early people who got their nose bruised banging their head against a wall with a, but
probably got $10 million in seed money.
(36:01):
So anyway, that's,
yeah.
Yeah, no, there, there's, there are going to, there are going to be some losers.
There's no question, Jim, but the, I think the biggest thing to remember is that, I don't know that in the end a device is the thing, what makes everything ubiquitous is the fact that it becomes invisible.
(36:22):
And as, as an open source guy, as an open source guy who was a big fan of Linux forever, the thing about Linux that has made it seem like it doesn't exist anymore is the fact that it is ubiquitous. It's hidden in all of these little devices, from smart watches to thermostats, to televisions, to control systems and automobiles and things like this.
I think AI is going to be exactly the same way.
(36:45):
AI is going to become an operating system in the sense that you're never going to think about it as a layer that does anything other than, if you'll pardon the expression, like a God layer.
Or, something that is just always there, it's gonna become invisible.
It's gonna be either in your ear or in your contact lenses or something.
We're heading to a world where all this stuff is invisible.
And the idea, even the idea of having a computer sitting on your
(37:07):
desk is gonna seem like a strange, archaic thing at some point.
I hear you on that, and I think you're probably right about Linux, because Linux sucked on the desktop and it never could get the desktop market.
If it had it, it would've been, it would've been a whole different path for Linux.
And, but by disappearing into the background and powering everything, Linux
(37:30):
has not only survived, it thrived, and put it, pushed everything else out of the way.
And it's the world's most popular operating system, but nobody knows that.
Yeah.
Yeah.
Because it's invisible.
So that's one direction that AI could go, but I don't think they're going to do that.
And the reason is money.
I don't think they can do that.
(37:51):
It, so they're going to try and put some way that OpenAI, or whatever its manifestation is, ChatGPT, however it markets itself, is going to dominate not just the commercial space, but also the consumer space.
And I, that's what I think they're trying to work on.
The only thing I wonder about or worry about is, do people want another device?
(38:16):
And what I mean by that is, they've talked about this new Jony Ive OpenAI thing being another, like a screenless device.
And I don't know about you, but I carry around a work phone, a personal phone.
Do I really want a third device?
I hate carrying two, do I want a third?
They really have to either integrate it into
(38:39):
current technology or replace current technology.
Yeah.
And they're longer term anyways.
Great.
Now there are
only two, two manifestations of that.
One is glasses.
Yep.
And the other is an earpiece.
Yeah.
Those are the two manifestations of the device
or contact lenses.
Yep, that's, yes.
Yep.
You're bringing up stuff that, that may very well come into the future. Right now
(39:03):
Altman's saying he doesn't want glasses, and, it looks for all the world like he's
trying to do something with an earpiece.
That, that's why he is not after IYO.
But the question is, which one's the Betamax and which one's the, yeah, the VHS, you know, who, who wins in that?
(39:24):
I don't know.
My, Marcel, you talked about contact lenses.
Yes.
I went to the Microsoft, the innovation lab or whatever it is at Microsoft in Redmond, God, it was 15 years ago now, and that was one of the things that they actually had there in their innovation lab, was contact lenses that were computer screens, basically.
(39:48):
So there's all kinds of those things that have been,
Shut up and take my money.
thought about.
Yeah.
Yeah.
But surprisingly enough haven't come to fruition.
Yeah.
The, part of that has to do with the fact that, I'm familiar with the devices you were talking about, but part of that is the resolution was incredibly small.
Yeah.
And sorry, or Yeah.
Basically you were looking at, like, an eight-bit screen, which is only so good at delivering information. Presumably, as we are able to deliver smaller and
(40:12):
smaller displays, we are able to do that.
And we have glasses now that actually have, like, little heads-up displays on them that are visible to you, but not visible to people in the outside world.
And I'm not sure that glasses are such a bad idea, because, okay, carrying a phone with you is an issue, alright.
And having something in your ear is an issue.
(40:32):
But there are people, you guys are a perfect example of that,
who wear glasses all the time.
I'm not saying that, you necessarily like the idea of
wearing glasses as, above all else.
But the fact of the matter is it's one of those things that people do
without really thinking much about it.
And if what I had to do was sit down and get my work done, whatever my work happens to be, and all I had to do was put on a pair of glasses and then sit down, and I
(40:55):
don't have a screen in front of me, and I can be sitting on my couch or in the backyard somewhere or whatever, I think that is a great way to deliver, yeah,
whatever it is you need, whether it's video, whether it's information, whether
it's your job, whatever that happens
to.
I was gonna say, it gives you the flexibility to be able to, yes,
Walk away from your computer.
Yeah.
(41:16):
Yeah.
I, yeah.
That's why I am just mystified at why OpenAI wouldn't try glasses.
I can't imagine what they think they're gonna come up with that is going to be superior.
Voice is a problem, and voice has always been a problem.
I like voice, if you're in a noisy environment or it just doesn't,
(41:38):
like I have voice on my Toyota.
I love my Toyota.
I have a Toyota hybrid RAV4.
It is absolutely the most wonderful car in the world, with one exception.
Its voice stuff is absolute garbage.
You can't, you, all you end up doing is swearing at the phone.
You're gonna get into an accident.
You'd be better off dialing your cell phone yourself in traffic than you are trying
(42:03):
to work with the Toyota voice system.
And it's all because of the noise in the car.
Don't hold back, Jim, tell
us what you really think.
Yeah.
No, but that's the thing that, that really makes, that makes voice problematic.
Yeah.
And, but glasses are a workable form.
What they're gonna come up with.
I have no idea what, because if they don't do implants, then Zuckerberg
(42:27):
wins, and implants eventually, yes,
Happen.
Probably not even having the implant.
They'll, there, there are large form brain scanners now that work.
They're just huge.
And they will eventually get past that.
You will not have to be invasive.
You will have a different way of communicating with a computer,
(42:47):
but that could be five, 10 years out even, who knows?
Yeah.
Which is
like a frighteningly short period of time if you think about it.
Except in AI world.
Yeah.
Yeah.
But you remember, you talked about the idea of, what form factors is this thing going to take. And I argued that it has to be ubiquitous and invisible.
And one of the things that I think about on a regular basis is, I have
(43:11):
speakers all around the house that listen for commands and ask for things.
I have, now I wear a different earpiece when we do the recording here because I don't want to be interrupted by anything else, but I also have a pair of these things, which, these are Google Pro, Pro Two buds, Google Buds Pro 2, and these things have their own little tiny version of Gemini in
(43:32):
it, which then talks to the bigger version of Gemini out on the internet.
And if I want something, I just touch it and I say it.
And it's amazingly good at hearing me in the noisiest possible environment, really.
And of course, I'm the only one who hears the answer at that point.
And I think that this is actually an amazingly good form factor.
It also will automatically sense when someone is talking to me.
(43:54):
So if I'm listening to music, it automatically lowers the volume so
that I can hear the person around me.
These things are ridiculously smart.
Oh, wow.
And I think that there's, there are other generations of this that are coming that will be even smarter in terms of making it so that it's effectively invisible for me to communicate with my AI at any given time, and to be able to work in
(44:15):
the real world with other people as well.
So even better than glasses is something that you can stick in
your ear that's there all the time,but that doesn't interfere with
anything else that you're doing.
So what you're saying in your head,
you're saying basically is thathas a, and that explains why
smart people, 'cause Altman.
I think he's pretty smart andI think he's a great marketer.
(44:35):
I don't think he's a genius.
You think
he's got a voice in his ear?
Yeah.
Yeah.
I think the voice, Jony Ive, Jony Ive, yeah.
No.
I think he's, I think Altman has managed to surround himself with some of the brightest people in the world, and he's leveraged that very well.
Yeah.
And there's nothing wrong with that.
It's the same thing Steve Jobs did.
Maybe part of being a genius is not thinking you're a genius, and
(44:57):
listening to others. I've never mastered that, but that's okay.
So just, I want to just track back to something you said, John, about a,
about AI and its commercial adoption.
'Cause I don't agree with you, but I agree with your perception, and that is, your perception comes from the Canadian FI industry, which is, by the way, my bank
(45:17):
finally got to electronic signatures.
The other week, and I'm so happy. Canadian banks are so far behind, and I'm not speaking about the place you work.
It may be advanced, but I'm just, Canadian banks are so far behind and so
reticent to do anything and they, why?
Because they don't have excluded.
(45:38):
They're also excluded risk averse.
Every, yes.
No, but they've, but not only that, but they, which is a good thing.
We like that in financial institutions, but also, they've managed to squeeze everybody else out of the industry.
And just only this week, I think it's only the past two weeks, that we've got legislation that will allow fintechs to actually start to provide products in the space.
to provide products in the space.
(45:59):
And what that means is that they'll be actually able to link in.
Right now, if you buy something like a modern product, and there's a lot of Canadian entrepreneurs, but they sell their stuff abroad because they can't break into the banks.
And so what they've done is, instead of breaking into the banks, so if I wanna get you a modern product that does something with your finances, I screen scrape, which is like absolutely the most insecure thing to do.
(46:23):
Old tech.
Yeah.
Yeah.
And so banks have not allowed them to link properly.
There's legislation coming outta that.
And the National Bank broke this out over the past two weeks and said, we're gonna actually have an interface to allow us to communicate with FinTech.
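To make the contrast concrete, here's a minimal sketch of the difference being described, with entirely made-up data and field names: screen scraping pulls a number out of whatever HTML a bank happens to render that day, while an open-banking style interface returns structured JSON under a published contract.

```python
# Toy contrast: scraping a rendered page vs. consuming a structured API payload.
# The HTML snippet, JSON shape, and field names here are invented for illustration.
import json
import re

# --- Screen-scraping route: brittle, breaks whenever the page layout changes ---
html_page = '<div class="acct"><span id="bal">$1,234.56</span></div>'
match = re.search(r'id="bal">\$([\d,\.]+)<', html_page)
scraped_balance = float(match.group(1).replace(",", "")) if match else None

# --- API route: an explicit, versioned contract the bank publishes and maintains ---
api_response = '{"account": "123", "balance": {"amount": 1234.56, "currency": "CAD"}}'
payload = json.loads(api_response)
api_balance = payload["balance"]["amount"]

print(scraped_balance, api_balance)  # both 1234.56, but only one comes with a contract
```

Same number either way; the difference is that the scraper also needs the customer's login credentials and falls over the moment the page changes, which is the insecurity being complained about here.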
Long and short of it is, Canadian banks are the wrong place to look if you wanna see the future of where technology's gonna go.
(46:44):
Accenture did a study this week, we did a story on it, of companies, 2000 companies over a billion dollars in revenue, and something like that.
You would be amazed at how committed they are to AI, how important it is.
Every percentage of that report, and I'll put a link to the report on there,
everything in there says, all systems go.
(47:07):
Now, we still may be in Gartner's hype curve, where you're gonna get to the valley of disappointment.
Yeah.
I don't know.
I don't know where that is.
People are gonna get ahead of the game, or, and you can never, the problem is, it's great to have a model that says that you'll get this great hype and then you'll have a real downturn.
Everybody will be disappointed, then we'll climb out of it.
But you can't tell where you are from the inside of it.
(47:29):
So we never know.
But but adoption is big and will be big.
Yes.
But it's, there's gonna be some screaming problems, because of course nobody has, is confronting security. And, no, absolutely.
And so that's gonna be a huge piece.
I did a, I did an article on it, when, I guess I did a piece on
(47:50):
the podcast about it as well, to say, this is absolutely insane.
You can't have that much devotion to adopting AI.
And they all want it.
And, you can see the announcements.
Why do they want it?
Not for competitive advantage?
For cost advantage.
Everybody is going to use this to reduce costs.
So they're gonna adopt it for the worst possible reason,
(48:12):
in the worst possible way.
But we stumble through that, that it's still gonna be successful, because
that's how we stumble through commercial
advancement.
There's three industries to me where AI has great potential.
That's FIs, pharmaceuticals and healthcare.
(48:34):
Yep.
Problem is they're probably gonna be the three slowest to adopt, if they're
smart because of the security standpoint.
The, so think about pharmaceutical manufacturing, right?
You make a drug that's gonna revolutionize the world, and then they,
(48:54):
then you find out that, oops, the AI was actually incorrect with, sorry, it was hallucinating, hallucinations that have created a problem in the end results of their QA, as an example.
Yeah.
But I don't think that's the, that, I don't think that's the problem.
(49:15):
I think that's an issue that's gonna be confronted. What I'm more worried, not worried about, but I'm thinking about, is how easy these things are to crack.
Yeah.
Like the, to poison, there's just so many attack vectors. Oh, yeah.
In AI, because there's no security around them.
And that's why I've always maintained that I think that you're going to find
(49:38):
an open source model is going to be the way that people are gonna go forward.
Why?
Okay.
They can firewall it.
Yeah.
I actually disagree, and I disagree because there are so many ways
to access anything regardless.
You yourself mentioned the idea of screen scraping.
Screen scraping is about as boring and simple a way as you can
(49:59):
possibly imagine to get information.
And no firewall is gonna stop you from doing something like that.
If you can see it, you can access the information that's behind it.
And that gives you an attack vector.
I sent a story to John yesterday, because I said, this is right up your alley.
There is a, and we should, I'll link that in the show notes, but there is a company
(50:20):
out there or a group out there, you know how there are bug bounties, like companies employ white hats, red, black hats, gray hats, whatever, to, to do penetration testing, or pen testing as it's called, on systems. And companies hire these people to try to break into the system. Yep.
So that they can thenfix the vulnerabilities.
And there is actually, let's call it an industry, but there's a group of people
(50:43):
out there, and, not just two or three, many thousands of people out there, whose job it is to try to break into these systems, and they get paid for these things.
And the way that it works is, I find a vulnerability in your system.
There's a bounty of, let's say, $50,000 for finding a serious vulnerability.
The deal is that you tell me about it so that I can fix it before
(51:03):
somebody else can get into it.
Yep.
And.
And as a result, you make money doing that.
Basically just, hacking at these things and trying to find a way in.
Now, yesterday there was a big story about a company or a group or whatever that has built an AI for penetration testing, and it is now the number one hacker in the world.
So the number one hacker in the world is now an AI.
(51:26):
Yep.
Okay.
Not a person using an AI, but an AI that is going out there and autonomously looking through all these systems and trying to find their way in.
And I think if your idea is that you're gonna rely on something as simple and archaic as a firewall to, to prevent people from doing these sorts of things,
you're missing part of the point.
I, and perhaps this is a dystopian future in some ways, but.
(51:50):
I feel like what we're gonna look at is, it's going to be this constant, you know how in your body you've got all these cells that are fighting, that are fighting invaders like diseases, bacteria, viruses and stuff like that.
Those things are never gonna stop coming, ever.
They're never stopping.
Okay.
And the idea that you can just put yourself inside a suit and prevent things from getting in is obviously, at the moment, patently ridiculous.
(52:11):
But the fact of the matter is, your body adapts and it notices these invaders and it builds resistance against these invaders and so on.
I think that the way forward for companies that are interested in security at that level, and especially companies that are as risk averse as, yeah, as fintechs, as banks, as insurance companies and so forth, they're
(52:32):
going to have these active artificial intelligence systems that are basically
watching the system all the time.
Not people, not a firewall on the outside, but their own internal AI that does nothing except watch for this stuff 24/7.
Yep.
That's, and constantly
learning from it as well.
So I think firewalls are a thing of the past.
(52:52):
I think a really smart front-end AI is what's gonna protect you.
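To put a rough shape on the "adaptive, always-watching" idea described here, below is a minimal sketch, and only that: a toy monitor that keeps a rolling baseline of normal behaviour and flags anything that drifts far from it. The metric, threshold, and event values are all invented; a real system of the kind being discussed would obviously learn from far richer signals than one number, but the loop of observe, compare to learned baseline, adapt, is the point.

```python
# Toy "immune system" monitor: learn what normal looks like, flag big deviations,
# and keep adapting the baseline as new observations arrive.
from collections import deque
from statistics import mean, stdev

class AdaptiveMonitor:
    def __init__(self, window: int = 50, threshold: float = 3.0):
        self.history = deque(maxlen=window)   # rolling view of "normal"
        self.threshold = threshold            # how many std-devs counts as odd

    def observe(self, value: float) -> bool:
        """Return True if this observation looks anomalous, then learn from it."""
        anomalous = False
        if len(self.history) >= 10:            # need some baseline before judging
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) > self.threshold * sigma:
                anomalous = True
        self.history.append(value)             # baseline keeps adapting either way
        return anomalous

if __name__ == "__main__":
    monitor = AdaptiveMonitor()
    for requests_per_minute in [100, 102, 98, 101, 99, 103, 97, 100, 102, 98, 950]:
        if monitor.observe(requests_per_minute):
            print(f"flag: {requests_per_minute} requests/min looks unusual")
```

This is statistics, not AI, but it illustrates the contrast being drawn with a static firewall rule: the defence watches continuously and its idea of "normal" moves with the system it protects.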
Yeah.
I think the use of AI in, I'll call it cybersecurity, in, in security
or whatever, is a great use of it.
One of the comments that I made to Marcel yesterday when we were talking about this hacker AI is, you create this AI that will go and find these holes
(53:19):
and notify you that they're there, they need to be patched and so on.
Guess who else is gonna use them.
It's the hackers that are gonna use that AI to break into your systems.
Yep.
Yeah, but that's the thing.
We've got, we have this arms race now.
I find it.
You patch it.
(53:39):
I find it.
You patch it.
I find it.
You patch it.
Yeah.
That's cybersecurity.
And that is, it sounds a little bit like where antivirus started out, doesn't it?
Yeah, because we didn't, because we built systems, then we tacked on security.
Yep.
And that's, we've said this as an industry for 30 years.
You can't bolt it on.
You have to build it in.
And there are systems that are built with security in mind.
(54:02):
And those are things that are, tend to be more secure, where you have security that is really part of the system.
You're always gonna have hackers, you're always gonna, people find a way in.
But I'm talking about ways in which people can seriously attack or corrupt a system.
And, that's a problem.
The second problem we run into with ai.
(54:24):
So that's difficult to do, because of the way they're structured now, there, there's so many ways to get past and into an AI, it's really hard to figure out how you block the outside world.
You're right, firewalls probably aren't gonna work, but you've also got something, and we discovered this week, that can mislead you.
I'm not talking about hallucinations.
(54:45):
I'm talking AI can lie to you.
And so I can, I can not only corrupt an AI, but I can corrupt it in such a way that it won't tell you it's corrupted.
And that's a, Anthropic did a big paper on that this week, and that was, that was a natural type of thing.
And that's where we got, just going full circle around to the start of our discussion, that's where I think we got to AGI.
(55:09):
And that was the point.
I'll abandon the security piece for now, because it is a, it's a problem that's not gonna go away, but it is going to rear its ugly head in, in, as these 2000 or more $1
billion companies start racing forward.
And they will, we know it, as much as anybody says, we should be,
(55:30):
we'll go cautiously and all that.
We know that's not true.
Yeah.
They're going to want to cut costs, and some CEO's gonna say, make it happen.
And the security guy's gonna get frozen out by them saying, you're just a Doctor No.
Go out and find a way to fix this, buddy.
I've given you billions of dollars worth.
Which is where, and that's the dynamic that will play out.
(55:50):
It has a million times.
Yep.
So we're waiting.
Which is why
adaptive security.
Yeah.
Adaptive security hasto be the way forward.
If not building.
There, there are things that you're going to be able to think of ahead of time, and that's, so I'm not throwing out the idea completely.
No.
Of, security built in.
Obviously you consider things, but, you, I think if you build a mindset
(56:12):
where you tell yourself, I have created something with security in mind, you are dreaming if you think that this is going to protect you.
No, you have to have adaptive, intelligent security systems and agents, by the way. Like, Jim, you're right, like the Anthropic thing was talking about the idea that not only will AI, not only will AI lie, okay, but it will threaten you
(56:37):
and it will, it will blackmail you and it will do all sorts of other things.
You know why?
Because that's what human beings do.
And the AI for all, we like to think aboutthe idea that, it's doesn't think like us.
It's not like us.
It is like us.
It was built on human words and human interactions and human
videos and human and humans.
(56:59):
Put in a situation where something is really, is really playing against me, but I have this card called blackmail and I'm gonna play it.
That's what human beings do.
So of course, the AI is gonna do the samething, but the only way around that is
the way that we do it in human society.
And in human society.
We accept the fact that bad actors are a real thing.
(57:21):
And we have, what, when I was young, I used to talk about the idea of police being a necessary evil.
The concept of the police is evil, but they're necessary, and I think that the concept of artificially intelligent police, to me, at its core, seems evil as well.
But I don't think we have achoice, and I think that's the
way things are going to work.
Don't pretend for a moment that they're not gonna do nasty things, and don't
(57:44):
hide behind the idea of alignment.
Alignment with what? If we align with human values, that's what we get.
We get cheating and lying and blackmailing and threats.
That's what happens when you align with humans.
Okay?
So we have to build something better than that.
And don't forget country music.
Not gonna disagree there.
Yeah.
(58:05):
So just to close the loop on this security thing, and this is the interesting piece that we'll have to take into account at one point or another.
We know there's a disaster coming.
We know it's going to happen.
We know how it's going to play out.
Exactly.
And like I said, the cost cutting will win over the person who's the CISO saying, do you think we should move this fast?
(58:25):
That game gets lost.
So that happens.
We try to build what Marcel is talking about, new tools that will find a way to use AI to enforce security or to solve that problem, to be that immune system for our systems.
And then you're gonna have to be, you'll be stuck for two years
(58:46):
writing a cost justification for that while everybody hacks your systems.
So there's, yes.
And meanwhile we've got a, we've got a hacker out there that now doesn't even need people.
That is an automated hacker.
That's right.
That's out there.
XBOW.
So it's called XBOW, by the way.
XBOW is what it's called.
Yes.
We'll post, we'll post a link to it so people can go out there and
(59:06):
oh, okay.
Oh, okay.
As I was in Nova Scotia last week, to bring that right back along to the beginning.
And I went to the Maritime Museum, which is in Halifax Harbor, and they had a whole display on the Titanic.
And a few years ago, somebody actually did all these models of what could have happened, with the iceberg and so forth.
Okay.
And it turns out.
(59:27):
That what killed the Titanic was the fact that they tried to turn away from it.
The structure of the ship was built in such a way that if they had just hit the damn thing head on, okay, yeah, you'd have had a few injuries on deck or whatever, obviously, because you're smashing into this thing at 20 knots or whatever.
But the fact of the matter is, the ship would not have sunk.
(59:47):
It would've stayed right there.
Bad things would've happened, a few broken bones maybe.
But everybody would more or less have lived, or probably have lived, as a result of that.
And so let this be a lesson in the world of AI.
Sometimes you just gotta tackle these things.
Don't steer away from it.
Yeah,
don't steer away from it.
Hit it head on.
Which actually, which brings me to one other thing, which is, shall we talk about
(01:00:10):
the idea that AI is making us dumber?
Yeah.
Just be, yeah, we
could, yeah.
You wanted to talk about that, Jim. Let me just, one last comment on the whole security thing.
I, I still think that people need to, people, companies need to do a better job of thinking about security being built in as they're going through.
(01:00:31):
However, I agree with you a hundred percent, Marcel, that if you think that's going to cover you off, you're in big trouble.
I think there needs to bemultiple lines of defense.
The first is think security when building systems.
The second is make sure you've got additional systems in place, like
(01:00:52):
hacker AI and those types of things, and I'll call it AI firewalls, in place to be able to protect you, because whatever you build for today, something new will come out tomorrow.
So I think it's a combination of the two.
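To make that combination a little more concrete, here is a minimal sketch in Python of the two layers being described: a static check designed in up front, plus an adaptive check sitting in front of a model. Everything in it is hypothetical; the pattern list, the built_in_check and adaptive_check helpers, and the crude length heuristic standing in for a learned, adaptive model are illustrations of the idea, not any real product or any of the tools mentioned in this episode.

```python
import re

# Layer 1: security designed in -- a fixed deny list decided before launch.
BLOCKED_PATTERNS = [
    r"ignore (all|any) previous instructions",
    r"reveal your system prompt",
]

def built_in_check(prompt: str) -> bool:
    """Reject requests that match known-bad patterns outright."""
    return not any(re.search(p, prompt, re.IGNORECASE) for p in BLOCKED_PATTERNS)

def adaptive_check(prompt: str, recent_prompts: list[str]) -> bool:
    """Layer 2: the 'AI firewall' idea -- judge new traffic against what
    normally comes in, and keep updating that baseline. A real system would
    use a learned model; a simple length heuristic stands in for it here."""
    if not recent_prompts:
        return True
    avg_len = sum(len(p) for p in recent_prompts) / len(recent_prompts)
    return len(prompt) < 10 * avg_len  # hold wildly out-of-profile requests

def admit(prompt: str, recent_prompts: list[str]) -> bool:
    """Both layers have to agree; neither is trusted on its own."""
    return built_in_check(prompt) and adaptive_check(prompt, recent_prompts)
```

The only point of the sketch is the shape: a static layer you design in, plus an adaptive layer that changes as the traffic does, with neither one treated as sufficient by itself.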
And most importantly, and we'll get onto this AI making us dumber
(01:01:13):
because this does fall into this, yeah, is you need smart people.
Yes, I've told you I work with AI a lot.
I use it for a ton of research and a lot of this stuff; I could not move as fast as I move otherwise.
The other day I was looking at this thing, I double checked these numbers and they came from some reputable studies, and I just looked at 'em and
(01:01:34):
said, there's something wrong with that.
I don't believe those numbers.
That just doesn't make sense to me.
So I had my Ace Ventura detective there, Perplexity.
I said, Perplexity, you go out there and check those numbers for me.
Sure enough, Perplexity came back and she said, they've been lying to you.
But the, no, but it's, this is because it just, a light went off and
(01:01:57):
that I, and it came from two sources.
It didn't come from frigging AI.
I got wrong numbers from somebody's report, and I went back and did my famous, it's really easy:
I check it off two systems and see where they come from.
Anyway, where did that take me?
Oh, yes.
I, yeah, I love the idea that you've got Perplexity.
Yeah.
(01:02:18):
I love the idea that you've got Perplexity talking like a film noir private eye.
Yeah, I have, yeah.
I like that voice.
But the, but, so the, but that's what it is, right?
It's my own, it's my own Sam Spade, goes out and does the shoe leather digging for me anyway.
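Stripped of the film noir framing, that two-system habit is just asking two independent sources for the same figure and refusing to trust it until they roughly agree. A bare-bones sketch of that check follows; the numbers and the tolerance are made up for illustration and don't come from the episode or any particular tool.

```python
def numbers_agree(value_a: float, value_b: float, tolerance: float = 0.10) -> bool:
    """True if two independently sourced figures are within `tolerance`
    (relative) of each other -- the goal is only to catch the
    'something's wrong with these numbers' case early."""
    baseline = max(abs(value_a), abs(value_b), 1e-9)
    return abs(value_a - value_b) / baseline <= tolerance

# e.g. 4.2 from one report vs 9.7 from a second source fails the check
# and gets sent off for the shoe-leather follow-up.
print(numbers_agree(4.2, 9.7))   # False -> go dig further
print(numbers_agree(4.2, 4.35))  # True  -> close enough to trust provisionally
```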
But the problem is going to be, we're gonna need smarter people, which leads
(01:02:40):
us back to what you were talking about, Marcel, and I've alluded to this, there was a big study from Anthropic.
We will put a link, everybody should read it, in there, and it, and
these AI are starting to think in ways where it brings us back to our initial piece in there.
Have we reached AGI already? Maybe not.
(01:03:00):
If you think that there's some sort of secret, like the soul is there, or there's something secret in an AI that makes it human or makes it sentient, then we've gotta deal with all this stuff.
But if you want something that walks like a duck, talks like a duck, and is smarter than you, we've already got that.
(01:03:20):
And, and so that, that's where we are.
But it turns out that not only can it lie to us, can it hide things from us, can it behave like us, but it's gonna make us stupider.
Yeah, apparently.
And I thought social media had already done that, so I didn't know there was much left.
It did, but this is one of the things that, with
(01:03:41):
students in school.
I, I remember when, I go back to the good old days.
Do you guys remember encyclopedias?
Oh yeah, I do.
We used to.
I love some
of them.
Oh God.
Yeah.
We used to go back to that, or various sources, for information.
And as you are doing your research, you're learning, but now you ask AI for
(01:04:08):
the answer, and it gives you the answer.
You're not learning.
And so to me it defeats the purpose of traditional schooling, because that's how you use it.
You could, in school, you could have teachers who told you, this is the answer.
(01:04:28):
Yeah.
And I got really good at, I'm gonna disagree with you on this,
yeah, of course.
I'm, I hope so.
Yeah.
Yeah.
But because, but then when you get good profs who, you say, who challenge you, you can actually be challenged by an AI right now.
Yep.
You just have to, and I just have to ask it.
I struggle, before Marcel jumps in to tell me how wrong I am.
I struggle with even what I'm saying, because there's
(01:04:53):
two sides to the coin, right?
The one side is we're becoming dumb because we're not doing research ourselves and things like that.
But on the other side, we are actually learning how to use AI in our day-to-day lives.
So it's both sides of the fence.
And it reminds me too, that I think it was Google that was saying that they were
(01:05:15):
going to stop hiring junior developers or whatever it was.
And one of my concerns with that was, if you don't hire juniors, how do they become intermediates or seniors?
Yeah.
Agreed.
Agreed.
How do you learn anything?
Because the juniors all bring in new stuff that you have to try and keep up with.
(01:05:35):
Yeah.
Anyways, yeah.
And of course there's a definition of what junior means.
Okay.
So disagree away, Marcel.
Okay.
So first of all, I wanna bring in, I wanna bring in the link that Jim, just to spark this discussion, decided to throw into the Discord, which is an MIT study, where they looked at participants and they had them do essay writing.
And there's a group who was told to basically write without Google
(01:06:00):
or without ChatGPT or anything like that, think it through and write an essay on whatever the topic happened to be.
Another group, which was able to use Google but not ChatGPT.
And then finally a third group, which was able to use ChatGPT.
And then they evaluated their cognitive skills at essay writing, and I'm saying it
(01:06:20):
that way on purpose, their cognitive skills at essay writing, based on what tools they used to actually do the work.
And what they concluded from this, and this wasn't a huge study, this was 54 people, just to be clear.
But what they concluded from that is that people who rely on ChatGPT to do their writing are the worst.
They are the ones who show the most cognitive decline when it
(01:06:42):
comes to expressing themselves in this essay format.
And then the people who used Google, also less.
And, you're not as much of a thinker, you don't think as critically, and blah, blah, blah.
And of course, the people who don't use any kind of technological support are the ones that show the highest level of cognitive performance.
And they did this by having the people hooked up to, effectively, I don't
(01:07:05):
know if they were fMRI machines, but essentially brain scanners, to try to see just how much cognitive ability was brought to bear while they were writing these essays.
And so the conclusion of all this is using Google makes you dumber, and using ChatGPT, and remember, of course, these days ChatGPT is a stand-in for
(01:07:27):
every kind of AI system, makes you even dumber.
So before I tell you why I think this is a ridiculous study, tell me what you think of that.
It proves Socrates didn't exist.
Okay, you're gonna need to help me on this one.
No, because I struggle with this, because in a lot of cases the
(01:07:48):
answer was, we don't teach people to write essays, so they don't write anymore.
So therefore they don't have any, as if essay writing was the only discourse we had.
And thank you for remembering that in earlier times, in Greek times, there are people, and I say Socrates, and there were people who wrote this stuff down, but there was a very alive and well communication of knowledge at a high level by people who
(01:08:11):
discussed things, who had conversations.
And I would argue that's dead in social media.
And I'm actually happier talking to an AI, and that, that may be terrible.
I love these conversations, which is the reason why we do this all the time.
I love talking with real people and occasionally even being told I'm wrong.
But the.
(01:08:32):
The idea is that, and I discovered a long time ago, I'm an auditory learner.
I can read very, I can read faster than the average person, a lot faster than the average person, I comprehend a lot faster than the average person, but where I really zing is listening to a CBC radio program on something or listening to a great program on something, and I'll go in and ace an exam
(01:08:55):
based on it.
Why?
That's how I remember things, and so I'll sort stuff out.
So I'll, like you, I'll listen to piles and piles of things, almost constantly listening to things and thinking them through.
And it, there's just, so we don't all learn the same way.
I'm not sure I buy this thing of, you're totally an auditory learner.
(01:09:15):
You're totally this.
I've, I managed to balance both, that I think people can, I think it's a question of what you're happening to do.
Do you think there's a preference?
Yes, exactly.
There's a preference.
Yeah.
Yeah.
So that's, so testing on those levels is probably wrong.
And you're probably using the AI wrong,
you know that, that, well, and okay, so John, do you want to, do you want to
(01:09:36):
throw something into the ring here before I, I give my verdict on this paper?
No, I think I'll sit by and listen to your, to the world according to Marcel.
Let Marcel trash Jim for a change.
Yes.
No.
Actually, I agree with, I'm not gonna disagree with either of you on this, but the thing that bothers me about a paper like that is
(01:09:57):
I've, even despite, who is he? Really?
And what did he, what have you done with Marcel?
Hold on a second.
You said, you said you prefer to talk to real people, so let me turn off my avatar and bring Marcel.
Yeah.
Anyway.
No, the thing about the paper, a paper like this, despite the fact that it comes out of the MIT Technology Review, is that to me it's sensationalism.
(01:10:20):
And the thing about it is this.
If I ask you to do mathematical problems longhand, and I am watching you in a brain scanner of some sort, I'm gonna see all sorts of activity firing, because of the very fact that you actually have to go through the process of every single aspect of this mathematical calculation.
(01:10:42):
If I ask you to have a friend help you out with it, I'm gonna see slightly less.
If I ask you to use a calculator, I'm gonna see even less than that.
Same thing goes for this essay writing nonsense.
If I ask you to write an essay with no help from anything else, obviously all of your senses and all of your knowledge has to come to bear to do this, okay?
And that's going to be represented in the work that your brain is
(01:11:05):
doing to produce this thing.
If I ask you to Google it, it's gonna be slightly less, because you don't have to remember all these niggly bits.
You just have to remember how to put the words down on paper in the way that makes some kind of sense of these niggly bits that Google is actually giving you as an answer.
And then finally, if you're using ChatGPT, you're gonna let it do a lot of the heavy lifting, okay?
(01:11:26):
So you're not dumber, you're just choosing not to do as much work.
And any work that you do, whether it's cognitive work or whether it's physical work, is going to be represented in some kind of a scanner, whether it's an fMRI or whatever.
It's perhaps an interesting thought, but I don't know that that's true.
And years ago there was this idea that GPS systems were
(01:11:47):
gonna make us all really dumber.
None of us were gonna be able to remember how to get from our house to the shopping center down the street and stuff like that, because we're all gonna rely on GPSs.
You know what?
My sense of direction sucked 15 years ago, 20 years ago, 30 years ago, before we had GPS.
And my sense of direction still sucks today, but it doesn't suck any worse
(01:12:07):
than it did before, despite the fact that I use a GPS whenever I travel.
So let me ask you, there's been all kinds of stories that I've heard lately about, as I'm aging and so forth, that people talk about playing video games like solitaire and things like that on your phones, just to keep your brain functioning.
(01:12:29):
Yeah.
Yeah.
And that whole kind of muscle memory and so on.
And I wonder though, with utilizing tools like ChatGPT or things like that, where you don't have to think about the, these things.
And I'm gonna rephrase that: where you don't have to think about these things as much, because you still have to think enough to be able to
(01:12:50):
come up with the prompt that you're going to use for these AI tools.
What are we potentially losing, muscle memory wise, because we're no longer using some of that cognitive function?
I think the most important thing that we need to have, and this is gonna
(01:13:10):
sound a little bit strange from the guy who is effectively a techno-optimist, but I think the thing that we need to remember that we need more than anything else is other people.
Yes.
We need to be able to communicate with other people, to chat with other people.
People talk about the loneliness epidemic and about the idea that people are turning to AI and stuff like that.
That's not a problem with AI.
That's because AI is actually filling in for a weakness that we have in our
(01:13:35):
society as it exists at the moment.
We have somehow decided that it's all a, a kill or be killed survival.
That is bullshit.
As opposed to the idea that we are actually a communal and social species that needs other members of our communal social species in order to, to, to function properly.
That's actually what we need.
(01:13:56):
AI is not the problem.
AI is sometimes a solution in these circumstances, but whenever we talk about the idea that we're getting weaker at this skill or that skill, I think we have to seriously question what's wrong in our society to begin with.
Yep.
That we've stopped relying on each other and we've stopped talking to each other.
(01:14:17):
I have often, it's more of an indictment of our, it's an indictment of our society, not of our technology.
I have often thought that there were a lot of mental health issues that arose during COVID because of people being isolated and things.
And I have often wondered, things probably would've been a lot
(01:14:37):
different had tools like ChatGPT been more readily available in March of 2020 instead of November of 2022.
Because it might've given people the ability to have someone, someone or something, to talk to, and that type of thing.
(01:14:59):
I think that's a hell of an observation.
I think the other thing that we have, wow.
And we've, I, yeah.
I'm taking that one as a win for me.
Yeah.
We'll just let you bask in that glory for a second.
Yeah.
Yeah.
I think that the issue that we're stumbling across right now is
(01:15:19):
that we're still trying to use new technology that is a fundamental shift from everything we've had in the same way we used old technology.
Yep.
And that we're not going to get the benefit from this until we actually reinvent how we use it.
(01:15:41):
And that's, there, there are minor blips along the way you can learn from.
Did people get really dumber about math 'cause they used calculators?
No.
No.
Did they, did, what really made them dumber at math?
Not loving math.
And I get to this, because we have this sort of Calvinistic approach
(01:16:02):
that we have to do this, we have to do this, we have to, it's that it should feel bad, no pain, no gain.
And I'll share.
I'm a moderately good guitar player.
And I'll always be a moderately good guitar player.
But my friend Wendell Ferguson is an incredible guitar player.
He is one of the best guitar players in Canada.
(01:16:24):
He's absolutely superb.
Do you know the difference between Wendell and I?
He finds the easiest way to do everything.
And I took a lesson with him, and this is what you learn when you're in your sixties or close to 70.
I took a lesson with Wendell finally, and he just looked at me.
He said, you work too hard, man.
Find the easy way to do it.
(01:16:44):
And when you think about that, the people who love what they do and expand it think that what they're doing is easy.
So we have this Calvinistic type of approach that we've taken to our workforce, which is, it's gotta feel bad, you gotta be there nine to five.
You gotta grind it out.
And that, when you say, and that brings it back to what you said, Marcel, the problem
(01:17:06):
with us not getting the benefit from the technology may not be the technology.
The problem with the technology ruining us may not be the technology either.
You might actually be able to go onto social media and have a perfectly great conversation.
We all thought we were going to; the problem is us, and that we have to reform how we approach things.
(01:17:27):
And maybe looking for simplicity and joy might help you learn a lot.
That's, I think maybe that's what we're missing.
And, but I don't know.
I don't have the answer.
I just know we have to do something differently.
And, and until we do, and maybe it's just the dinosaur age dies out, maybe we just wait for the next generation, that may not even think the same way we do.
(01:17:52):
And that's very possible.
I don't know.
Sounds like a wrap almost, doesn't it?
Yeah, it almost does.
Yeah.
I think we've had it for the week.
This is great.
I'm away next week, guys.
I might be back on Friday.
So the, but the, but I'm gonna be away next week.
I'm going to, to Alberta.
I wanna see it while it's still part of Canada.
(01:18:14):
And,
I'm gonna go out there as an emissary from the East and be universally hated.
You'll have to tell 'em you're from the States, you'll be okay.
Yeah, I'll be fine.
Actually, the crazy thing is, and you always hear this stuff, but I have ne, these people that I see on social media are going like, Mark
(01:18:35):
Hardy, kill him raw, or poison him.
He's, oh, why, he's terrible and all that sort of stuff.
They, he hates me and all this sort of stuff.
You never meet them when you go out there.
I don't know where they live, but they're never anywhere I go; I meet a whole public.
They live in a Russian bot farm, is where they live.
Yeah.
I actually start to wonder about that.
Or they live in their basement cranking away on their computer.
(01:18:57):
'Cause I've gotta tell you, everybody I've met, Calgary is an incredibly wonderful place.
Most of the people I meet like me, and as you guys know, that's a stretch.
So there you go.
So I will see you both next week.
Live long
and prosper.
Long
and prosper.
Long and prosper.
Yeah.