
November 25, 2025 · 7 mins

Legal GenAI enthusiast and speaker, and Chair of Women in Technology WA, Janie Plant dropped into the studio to chat. Lisa asked about the benefits and downsides of AI; Russell wanted to know if the risks outweigh the benefits. Tune in to hear the full chat.



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
There is no longer any escaping it. It will find you: AI. Today we've got Janie Plant joining us as we delve into AI all this week. Janie, you are the chair of Women in Technology in WA, and that means you are pretty familiar with AI.

Speaker 2 (00:14):
Yes, I very much am.

Speaker 1 (00:15):
I'm terrified of it. There are benefits, there are risks, and there are limitations. And this is what I'm hoping to get from this week: to be a little...

Speaker 3 (00:24):
Less terrified to understand it. The more you understand it,
the less terrified you.

Speaker 1 (00:29):
I don't think I can afford to be terrified of it.

Speaker 2 (00:31):
No, you probably can't.

Speaker 1 (00:32):
There probably are bits to be terrified of. What are some of the downsides of using AI in society?

Speaker 2 (00:41):
So I suppose some of the downsides are that it's not foolproof, so it does make mistakes, and if people are over-relying on it all the time, then that's going to cause problems. If you are using it to make decisions that are critical, then that can also cause issues,

(01:02):
so you have to make sure that you're using it for the right things.

Speaker 3 (01:06):
Yeah, because I saw a report, it would have been about two or three days ago, where in China they had a panel of surgeons and doctors, and the AI made the diagnosis in two seconds. Yeah.

Speaker 2 (01:21):
So even here at UWA, I have seen research where they are feeding in scans of people that have got cancers and things like that, and then they're using AI to detect changes in their cancers so that they can customize and tailor the treatment.

(01:42):
So instead of going, you know, we normally do A and then we do B and then we do C, they might be able to see that for a particular patient, they've done A, but in actual fact they now need to jump to D, because B and C are irrelevant.

Speaker 1 (01:54):
Now, right, So that comes under the heading of benefits AI.
But what about a lot of people, especially younger people,
are using AI to replace human relationships?

Speaker 3 (02:06):
This cannot be good.

Speaker 2 (02:08):
Yes, no, it absolutely can't. So it doesn't have feelings, obviously. Well, I think Sarah was saying on Monday that that's kind of a watch-this-space, but at the moment it certainly doesn't have feelings. It doesn't have values or real-world experience, so it can't truly understand your

(02:29):
personal context, or love. It can't love your kids, can't take responsibility in the same way that a human can. So things like trust, which is obviously critical for relationships, empathy, ethical judgment, that kind of stuff, that all still sits very squarely with humanity.

Speaker 1 (02:45):
When your AI boyfriend says how are you and you say fine, he is not going to understand the subtleties of what "fine" means.

Speaker 2 (02:56):
So that's actually really interesting. When Russell said it cannot read the tone, I mean, you can ask it to read the tone. So you can say to it, you know, "my tone is annoyed" or something. You can give it parameters, because it doesn't know that on its own. So you are literally giving it context.
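
[Editor's note: to make the "giving it context" point concrete, here is a minimal sketch, not something shown on air, of stating your tone explicitly in a prompt to a chat model. It assumes the OpenAI Python SDK with an OPENAI_API_KEY environment variable; the model name and the exact wording are placeholder assumptions.]

```python
# A minimal sketch (editor's illustration, not from the interview) of
# supplying tone as explicit context, since a text model can't hear
# your voice. Assumes `pip install openai` and OPENAI_API_KEY set;
# the model name is an assumption and may need updating.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # The "context" described above: tell the model how you feel,
        # because it cannot infer tone the way a person can.
        {
            "role": "system",
            "content": (
                "The user is annoyed; when they say 'fine' they do not "
                "mean fine. Respond with empathy and don't take their "
                "words at face value."
            ),
        },
        {"role": "user", "content": "How are you? I'm fine."},
    ],
)
print(response.choices[0].message.content)
```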

Speaker 3 (03:17):
Let's go back to the risks. What about in a critical situation where AI makes a mistake? We were talking about the medical side of things before; who's responsible?

Speaker 2 (03:29):
So that's an excellent question. At the moment, we don't have any specific regulations or laws in Australia. We do have some guardrails and guidelines, so we've got the government's eight Ethical Principles, and then there's a National AI Framework and so on and so forth, but none of those are compulsory. But in any case, even without

(03:52):
those specific laws, what we do have are laws that generally apply across the board. So we have copyright laws, we have privacy laws, we have anti-discrimination laws, all those sorts of laws, and they all apply to everything anyway. So, you know, AI is a tool. It's no different in that way to any other tool that

(04:14):
you're using. So you need to make sure that it's secure, that you're not, you know, leaking people's private data, all that kind of stuff.

Speaker 1 (04:21):
I've always been curious; it's sort of a chicken-and-egg situation with AI. They say it's artificial intelligence, but doesn't it always have to be programmed, originally, somehow, by a person, a human? People say it's going to take over and stuff, but don't we

(04:43):
have to make it do what it does?

Speaker 2 (04:46):
So initially it's basically based on data sets. So some of the AI tools that you would have heard of are based on a data set, and we'll just call it the internet, because we're not certain exactly what it's based on. But obviously, because the internet is based on humanity, then

(05:09):
obviously there's going to be problems with what you get out of it, and so things like bias, that kind of stuff, are going to be an issue. But what it does after that is you can use subsets. So if you're an organization and you want to be able to use only your own data, you can definitely do that,

(05:29):
and then it's really just about making sure your data is, you know, cleansed, that it's not just a bunch of rubbish, because obviously it's that garbage-in, garbage-out kind of thing. So you can isolate the data that you're using, which is what a lot of specific organizations would do.

(05:51):
If you're looking at tools that are just using that massive data set, then you do need to be cognizant of what some of the issues can be around that. But it does learn from things that you do, and it does learn from the content that people are putting in. So if you're typing things into it, it is learning from that, unless you've told it that you don't want it

(06:12):
to learn from that.
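
[Editor's note: as an illustration of the "use only your own data" idea described above, not something demonstrated in the interview, here is a minimal sketch of answering questions only from a curated, cleansed document set, and refusing to answer when the data doesn't cover the question. The documents and the crude word-overlap scoring are hypothetical placeholders; a real system would use proper retrieval with a language model on top.]

```python
# A minimal sketch (editor's illustration) of grounding answers in an
# organization's own, cleansed data instead of the open internet.
# The documents below are hypothetical placeholders.
import re

def tokenize(text: str) -> set[str]:
    """Lowercase and split into words, dropping punctuation,
    for crude overlap scoring."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

# The organization's own, curated data set (hypothetical examples).
documents = [
    "Leave requests must be approved by your manager within five days.",
    "Client files are retained for seven years after matter close.",
    "All contracts over $50,000 require two partner signatures.",
]

def retrieve(question: str, docs: list[str]) -> str | None:
    """Return the document sharing the most words with the question,
    or None if nothing overlaps at all."""
    q = tokenize(question)
    scored = [(len(q & tokenize(d)), d) for d in docs]
    best_score, best_doc = max(scored)
    return best_doc if best_score > 0 else None

question = "How long are client files retained?"
source = retrieve(question, documents)
if source is None:
    # Garbage in, garbage out: if it's not in your data,
    # say so rather than inventing an answer.
    print("No answer found in the organization's data.")
else:
    print(f"Answer grounded in your own data: {source}")
```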

Speaker 3 (06:14):
I guess, given all that, and you look at all the sides at the moment, would you say the risks outweigh the benefits? What is it? Is it good? Is it bad?

Speaker 2 (06:24):
So I would say that I am optimistic. I heard someone say the other day they are cautiously optimistic. I'm not sure if I'm cautiously optimistic; I am optimistic. I think that we do have a long way to go. I think education is a really big part of it. We need to make sure that people understand it, that they understand the risks, the nuances, how to use it properly, what not to use it for, when to rely on it, and

(06:46):
so on and so forth. So, yeah, absolutely, as a lawyer, one of the things that we're seeing (I mean, doctors would have had this previously with Doctor Google, right?) is people turning up and going, oh, here's my legal advice, and it sounds authoritative. And so you've got to say, yes, but in your situation that doesn't actually apply. And it can be quite tricky to sort

(07:09):
of get people to understand that, you know, the legal advice that you're actually getting from a lawyer is totally different, and well worth the money that you're paying for it, compared to what you can get from, you know, ChatGPT or something like that.

Speaker 1 (07:23):
We shall continue to watch this space, then. Thank you, Janie, today.

Speaker 3 (07:28):
Helping us on our journey to understand the brave new
world of AI. It's coming for us. You can run,
but you can't hide.