Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Right now, though: Australia has banned the AI chatbot DeepSeek
from all federal government devices. There are security
concerns over China's foray into the AI market, so the
Aussie government's done it. Taiwan has banned the software as well.
We haven't yet. In 2023 we banned TikTok,
which is Chinese-owned. So will we follow suit here?
(00:23):
Should we follow suit here? Well, let's go to strategic
psychologist and AI commentator Paul Duignan, who's with me. Good evening,
great to have you on the show. Do you think
what Australia has done is the right move?
Speaker 2 (00:37):
Yes, very definitely. But you need to distinguish between the
two things happening here; it's slightly complicated. There's basically the
DeepSeek software itself, and that can be run on
Chinese servers or it can be run on other servers.
So the thing to ban at the moment is it
running on Chinese servers. So if you just download the
(01:00):
app on your phone and start using it, all that
information goes to China, and actually their privacy policy
says that if the government asks them, they'll give them
information. So you certainly wouldn't want public
servants using that and feeding any kind of sensitive information
through to China. So there's that issue. But a separate
issue is if you run it,
(01:20):
for instance, in New Zealand, and that's a question of
whether the actual software itself is safe. And already Microsoft
is running DeepSeek on its servers; because it's
open-source software, anyone can do it. So that points
to the fact that when it's run on your
own servers, in New Zealand for instance, then it would probably
be safe. So you want to make that distinction.
Speaker 1 (01:40):
How do you know the difference?
Speaker 2 (01:43):
Well, first of all, you would have to be assured that it
was running on local servers. Take the Microsoft one,
for instance. In fact, the Microsoft ones aren't necessarily
based in New Zealand, but if you went through
Microsoft and used DeepSeek via the Microsoft interface,
then you would know that it wasn't hosted in
(02:05):
China, because they would tell you that, if you know
what I mean. So yes, fair enough, from
the user's point of view you may not know. But
at the moment, if a user just downloads
DeepSeek onto their phone and starts using it, it is actually hosted
in China. That version of it, versus a special
version that was carefully secured and looked after:
you'd know if you were using that one, because you'd have
(02:25):
to go to some effort to get into the special version.
Speaker 1 (02:28):
Paul, have you downloaded it?
Speaker 2 (02:31):
Look, I downloaded it, and then I didn't sign on,
because I thought, no, this is madness. So I've got it.
I actually put it on my phone; I thought, oh,
this is cool, I'm going to use that. And then,
about to register my name, I thought,
I really don't want to do that. Because the thing is,
I'm a psychologist, right? So there are different types
of information. And the fascinating thing about a chatbot is
(02:52):
you start talking to it almost like a person,
and in a sense it can kind of profile you. Now,
the Chinese may have no interest in profiling you, but it can be done.
You know, as soon as you know all
the different things that someone may ask a chatbot,
someone who wanted to could get
quite a personality profile, a psychological profile, of you.
(03:13):
So I think you need to be very careful about
the ones that you use and don't use. As
for the average punter, if they want to download it
and they don't care, you know, they don't think the
Chinese are looking very closely at them, they may decide
to do that. But I'd be careful about what you
say to it.
Speaker 1 (03:27):
In terms of what can they look at — I mean,
what you've just said is actually quite interesting. Some
people might start talking to it like they would a
friend or a partner and tell it maybe some intimate
things about themselves, things that could potentially be used by
the Chinese government. And if you're a federal employee, then
perhaps that's not such a great thing.
Speaker 2 (03:47):
Well, absolutely, that would be a disaster for anyone who is foolish
enough to do that. But even at a more
subtle level — you know, at the moment companies track what we do, what advertisements
we look at, and what sites we go to, all
that kind of stuff. But the interaction with a
chatbot is sort of another layer, where, if someone
sought to, they could really, you know, kind of
(04:08):
profile you at a deep, almost psychological level, because they
would know everything that you're interested in, and they would
almost know your interaction style.
Now, obviously the Chinese government's not going to be doing
that with the average person in New Zealand. And this
is a concern for any software service that's based overseas,
(04:29):
any AI system based overseas. And what this raises is
a question of what's called data sovereignty, which is really important.
Speaker 1 (04:38):
Yeah, just before we go, I wanted to ask
how you got into this, because you're a psychologist, but
you've got an interest in AI. How
did the two become one?
Speaker 2 (04:50):
Well, up until now I've had quite an
interest in it, but it's kind of
been from a psychological point of view. You can't
psychoanalyze a spreadsheet, for instance, can you? The fascinating thing
about AI is it's kind of like this intelligent entity
has come into the world. So as a psychologist, I
find it fascinating, because suddenly we've got this whole new
(05:11):
entity, and you can actually give it psychological tests, you
know. You can actually treat it
in a lot of the ways in which we
think about people as psychologists. So that's really where
my interest came from. Plus I've also been involved in a software startup,
et cetera, et cetera.
Speaker 1 (05:28):
So have you actually treated an AI bot for, like,
mental illness or something?
Speaker 2 (05:32):
It's not got to that level yet. But
see, here's what I think
will happen. These things are getting more and more powerful,
and you know what's going to
happen at some stage: the culture wars,
and it's already happening, the culture wars will move into
the type of AI system that people are using. And
(05:53):
we know that Elon Musk has set one up which
has a certain flavor to it, and, say, ChatGPT
has got a different flavor. So we
will be sort of analyzing the personality of these ones,
and you could kind of pick and choose which one's
more attuned to your kind of worldview. So we will
get to that stage.
Speaker 1 (06:08):
Will people fall — do people
no longer need real-life relationships? Can they fall in
love with an AI bot?
Speaker 2 (06:17):
I think absolutely, definitely. This is a fascinating area.
Some people say this is bad and dangerous, and
obviously, if you do fall in love with a bot —
look what happened with Replika: they turned all the bots
off and a whole lot of people had their hearts broken.
So some people say it's bad, and you can understand
why they say it's bad. But on the other hand,
(06:38):
in modern society there are a lot of lonely people out there,
and some of them would argue, well, you know, I
don't have people coming around knocking on the door wanting
to talk to me, and I've got this thing to talk
to me. So if you don't like it, that's your problem.
But basically I'm finding I'm getting on really quite well
with it. So, as with everything with AI, it's
kind of a game changer. We're into a new world.
(07:00):
We're going to have to rethink a lot
of the assumptions that we had in the old world.
Speaker 1 (07:05):
Yeah, goodness me, Paul, you've given us a lot to
think about. Strategic psychologist and AI commentator Paul Duignan with
us this evening. For more from Heather du Plessis-Allan Drive,
listen live to Newstalk ZB from 4pm weekdays,
or follow the podcast on iHeartRadio.