Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:12):
Hi, I'm Claire Murphy. This is Mamamia's twice-daily
news podcast, The Quickie. It's coming at us, at work
and in our private lives. We're being told use it
now or get left behind. We're being told it's the future,
so get on board. But what AI tools can we
actually trust?
(00:29):
From AI assistants like ChatGPT, Gemini and DeepSeek to
video generators, note takers, research tools and search engines, email organizers,
and ones that take topics and turn them into a podcast.
AI is infiltrating our daily work and home lives. So
today we're going to find out which AI we can
safely use and why one in particular has been deemed
(00:52):
too dangerous for Australian government devices. But first, here's the
latest news headlines. Wednesday, February twelfth. The jury in Matildas
captain Sam Kerr's racial harassment trial has found her not guilty.
Kerr walked from court overnight with a smile on her
face after facing prosecutors who'd accused her of causing distress
to a police officer who she called stupid and white
(01:13):
on a night out where a taxi driver had refused
to let her and fiancée Kristie Mewis out and instead
took them to a police station after one of them
had thrown up. The pair believed they'd been kidnapped. Kerr
explained to the court how her behavior and obvious intoxication
wasn't a great look, but that she believed she was
being treated differently due to the color of her skin.
The officer did not believe she wasn't the one who broke
(01:35):
the back window of the vehicle for them to escape.
A jury returned the not guilty verdict after four hours
of deliberation. Kerr gave her lawyer a thumbs up as
her partner, Kristie Mewis, burst into tears in the gallery.
Getting a home loan is about to become easier for
those saddled with student debt after financial regulators promised to
update their guidelines on lending restrictions at the request of
(01:55):
Treasurer Jim Chalmers. ASIC and APRA have agreed to clarify
their guidance to lenders, which will include reducing serviceability and
reporting requirements for HECS debts, meaning banks can exclude the debt
if they expect the borrower to be able to pay
it off in the near future, and they recognize
that the size of a person's HECS repayment depends on
their income. Doctor Chalmers also said they're working to make
(02:16):
sure those with HELP debts are also treated fairly when
they want to buy a house. The banks have welcomed
the proposed changes, saying they were unsure how to interpret
the existing requirements that were essentially holding them back from
lending to some people with HECS debts, adding that because HECS
is paid pre-tax and the bank assesses serviceability post-tax,
it was essentially a form of double counting. The Australian
(02:37):
Government may end up owning Regional Express Airlines if a
suitable buyer can't be found. REX administrators have created a
short list of potential bidders for the airline, saying they're
committed to finding an owner who will continue to provide
an ongoing and reasonable level of service to the regions,
saying that commitment also includes value for money and good governance.
If those bidders fall through, though, the government will work
(02:59):
on a contingency plan, including one that will see the
Commonwealth acquire the airline itself. If REX becomes nationalized, it'll
be the first time an Australian federal government has owned
an airline in the years since Qantas was privatized in nineteen ninety five.
New research has found that Aussie teens aren't just lazy,
wanting to sleep in every day, their internal body clocks
(03:19):
are actually changing. The research, conducted by the Royal Children's
Hospital in Melbourne, showed a teenager's body clock, or circadian
rhythm, shifts during this time of life. Their hormones are more
stimulated into the evening, wiring them to stay up later,
but teens still require a lot of sleep as they're
growing and developing and their brains are changing, so they
end up needing that sleep-in despite staying up later.
(03:41):
Doctors doing the research also noted that things like inconsistent bedtimes,
screen use, and caffeine are not helping teens get the
rest they need, with forty two percent of children having
a problem with their sleep patterns. That's your latest news headlines. Next,
how much do we tell our new AI assistants about ourselves?
And what do we need to know before we start
relying on them for everything we do? Are you using
(04:08):
AI at work yet? Maybe you're using it for personal reasons. We,
for example, here at The Quickie, use Perplexity to help
us with research and ChatGPT to brainstorm creative ideas.
Some of our colleagues use NotebookLM to turn
long texts into fifteen-minute podcasts to better absorb the information.
Friends use ChatGPT to make their grocery lists or
dinner plans for the week. Safe to say AI has
(04:31):
well and truly arrived in our lives since really hitting
its straps in twenty twenty-three and twenty twenty-four. But can
we use these without potentially putting our data at risk?
The Australian government recently banned Chinese AI chatbot DeepSeek
from all government devices and systems, citing significant national security
concerns and data privacy risks. The ban, which took effect
(04:52):
earlier this month, requires all government entities to immediately remove
DeepSeek products and prevent their installation on official devices.
Home Affairs Minister Tony Burke said that DeepSeek poses
an unacceptable risk to government technology. Following the federal directive,
most Australian states have also implemented similar bans. Queensland, WA,
(05:13):
the ACT and the Northern Territory have now joined South
Australia and New South Wales in prohibiting deep Seek's use
on government devices. Tasmania is still considering its position based
on security advice. The major concerns about DeepSeek are
its extensive data collection practices and the potential exposure of
sensitive information to foreign government control. Cybersecurity experts have warned
(05:36):
that DeepSeek collects data like keystroke patterns that can
identify individuals and potentially match work searches with leisure time activities.
Now this level of data collection, combined with Chinese data
sovereignty laws that place the data under Chinese government jurisdiction, meaning it can
be accessed by Chinese authorities at any time, has raised
significant red flags for national security officials. There are also
(05:59):
concerns about DeepSeek's ability to protect its data, after it suffered
a serious breach at the end of January that saw
more than one million sensitive records exposed. DeepSeek, though, is different
from other AI tools like ChatGPT, Perplexity and Gemini:
it's open source and uses fewer resources than its rivals.
It was also made much cheaper than the competition, putting
(06:21):
the industry into a spin when it was revealed that
you can in fact power AI without the need for
overly expensive hardware. The Chinese government has strongly criticized Australia's decision,
characterizing it as the politicization of economic trade and technological issues,
and Beijing maintains that it has never and will never
require companies to illegally collect or store data. Chinese State
(06:44):
media has also accused Australia of ideological discrimination, suggesting that
if security concerns were genuine, similar restrictions would apply to
US based AI companies too. In an interesting twist, it's
emerged that one of the pivotal contributors to creating DeepSeek
studied computer science right here in Australia, and this
connection highlights the global nature of AI development and the
(07:06):
complex relationships between technology, education and national security. The DeepSeek controversy,
though, raises questions about the trustworthiness of all AI platforms
in general. While popular tools like ChatGPT, Perplexity and
Gemini continue to operate freely, experts warn about
sharing personal information with any AI system. Dr Nikki Swaney is
(07:28):
the founder and director of AI Her Way. Nikki, there
are so many tools we can access now. Who do
we trust? Are there any that are more trustworthy than
the others?
Speaker 2 (07:39):
Oh, I don't know whether to answer this personally or professionally. Yeah,
but I think we've got to analyze what products
we're using and why. I guess the first thing
to remind people is that all of these are commodified products, right,
so I often liken it to when you go into
a casino: they put bright lights everywhere, there's lots of
flashing colors, there's lots of distractions and entertainment. There's no windows,
(08:00):
so you can't tell how long you've been in there,
and the whole point is to kind of trap you
in there for as long as possible, because the longer
that you're in there, the more money you're likely to
spend and hand over to business, and that's good for business.
So when we use AI models, as yet, we don't
have truly free, truly open source, sovereign options. So what's
(08:21):
important to remember is any model that we're using, it's
a product, and they'd like us to use their product
for as long as possible because we're either paying with
money or we're paying with data, and both those things
are a real gold mine to any business that controls
these AI models. That being said, we do have global
security concerns over something like DeepSeek, partially because of
(08:43):
who owns it, So we obviously have some sort of
global tensions with security in China, we saw that through
the kind of TikTok debate, and DeepSeek is owned by
the people who make TikTok as well. Then we also
have the sheer design of DeepSeek. So DeepSeek
is made on slightly lower quality chips and with a
little bit less reasoning power, and just through the virtue
(09:04):
of how it's been designed, it is a little bit
more prone to data leakage. People should have awareness of
which model they're using. You should be using models where
you control whether your data is being shared with the
people who own that AI model. And then beyond that,
the US-owned models and UK-owned models probably are
a bit safer. But I don't want to make people
(09:26):
feel like it's downright safe, because, like I said, they
want your data and that's something that you've got to
think about, regardless of who you're using.
Speaker 1 (09:34):
You kind of brought up there that we are concerned about
DeepSeek for several reasons, but one of those is
the fact that it is owned by a Chinese company
that is directly linked to the Chinese government. Why are
we scared of China having our data but not scared
of oligarchs like Mark Zuckerberg having our data, for example?
What's the difference between Mark Zuckerberg having it or
(09:56):
the Chinese government having it.
Speaker 2 (09:58):
Yeah, there's a little bit of social conditioning, I think,
so we just inherently have a little bit more kind
of allegiance with states like the US and UK. Whether
or not that's completely correctly placed, I think is more
of a kind of individual choice. There's just a bit
more kind of mystery behind what happens to data inside
of China. There's a little bit more ambiguity between how
(10:19):
much control the government have, how much access they have,
and also what they choose to do with that, and
because they're not as aligned on you know, we've seen
issues with human rights on a global scale, there's equity
issues and security concerns. So it's also just because it's
a little bit more of a black box, we have
a little bit more mystery around how that data is used,
and that comes into the kind of security concerns over
(10:42):
these US models. That being said, the transparency over what
the US does with data, I think, gosh, the inauguration
really kind of shone a spotlight on how close these
tech companies are to the government now in the US,
and I think that's also worthwhile watching over the next
kind of six to twelve months of how those power
dynamics play out.
Speaker 1 (11:03):
Are there any Australian-based AI systems that we could
potentially look at, Nikki? I'm thinking Canva, for example;
it's based in Sydney and they have an AI model
for their graphic design business. Are there other Australian models
that we might be able to look at?
Speaker 2 (11:17):
Not that compete at the same level. So there's
a few articles and there's definitely a push right now
that Australia needs an Australian-owned AI. I just came
back from a forum about AI use for developing countries.
It's the same thing world over. So we have this
huge issue right now with data sovereignty. We know that
AI is really important to efficiencies and productivity, and we're
(11:38):
not just talking about rewriting your email, right? When we're
talking about AI at a global scale, we're talking about world
economies and how much power and access to influence we
all have at the moment. We do have this real
issue with the fact that there is not a lot
of autonomy over who owns these AI models and how
we can make them fit our own purpose, our own culture,
our own ethics, our own idea of what these models
(12:01):
should achieve. So it's definitely something we need to address.
It's definitely something that the government are talking about. It's
definitely something we've seen with the UK announcements. We have
nations now throwing money at the problem, so to speak,
to try and own these models. The thing about DeepSeek
is that it showed us that you can make
AI models much cheaper and much quicker than we ever thought.
(12:23):
So whereas the US companies, you know, OpenAI, Google,
are on track to spend about a trillion dollars on
AI investments, DeepSeek reportedly was made for about
six million dollars. So the one thing that DeepSeek
does do is model to us that perhaps this
is much more possible than we thought. And that's something
that is exciting for places like Australia and especially developing
(12:44):
countries around the world because it means that they might
have a choice about actually making AI that serves them.
Speaker 1 (12:50):
Can we talk about AI search engines, in particular the
ones that people are using the most: ChatGPT
and Perplexity. Anna sent us an email which kind of
links into this. She
gave us an example: she was trying to google nice
navy blue blazers. She wanted to buy one. She said
the only thing that kept coming up was the same
(13:10):
list of retailers, so the big guns, those that were sponsored,
and there were no smaller designers. There were no independent companies.
It was all the same people over and over. And
she said, is there a better way to enable true
search of the Internet? Is this where AI comes into play?
Can that help us better search things online?
Speaker 2 (13:30):
It can, actually, if you know how to use it.
So one thing is that with companies like Google, which has Gemini,
now when you search in Google, it comes up with
an AI summary. And we have to remember that AI,
by default, is a big data algorithm, and algorithms are
always about looking for patterns in data. So when we
ask AI models, whether that's through Google or directly accessing
(13:52):
something like ChatGPT, we say hey, can I
have a blue blazer? It's going to look at everything
it has about blue blazers and give you what it
most likely anticipates that you want, which is always going
to be the run of the mill, the stuff that
is most cited on the Internet. So that's why you're
going to get your chain stores or the same few big retailers. However,
the power of you as a consumer is that you
(14:13):
can design the instructions that you give. So if you
instead say, I'm looking for a blue blazer, I only
want you to give me boutique handmade options in Australia,
it's then going to use that extra information to go
and filter for those sources in its own data bank.
So you can absolutely control a little bit more around
what you get. But you have to understand that you're
in the driver's seat and you have to be very specific.
(14:35):
If you're not specific, it's always going to assume that
you just want whatever it's got the most amount of
data about. And that goes for blue blazers, it goes
for writing marketing messages, it goes for coming up with
suggestions for birthday presents, whatever you're using AI for. Unless
you're really specific and you force it to look at
the kind of fringe data sets, it's always going to
go middle ground.
Speaker 1 (14:55):
Can I ask you about that AI summary that Gemini
in Google gives you now? What's the benefit to Google of
keeping you inside Google itself rather than having you click the links
it suggests?
Speaker 2 (15:05):
Data, right? Data is the new gold. This is a
new global currency. We care less and less about money,
and we care more and more about data. So by
having you stay inside of Google's system every time you're
searching for something, in those summaries they do actually
hyperlink to where they get that information from.
So if you're clicking a link within that summary, that's
(15:28):
providing really valuable data about how people behave, what information
they validate, what they like, what they don't like, what
they find interesting, what keeps them on there longer. It's
just all about data, and that's a really clever way
to kind of feed us something that's helpful to us
and that gives Google slash Gemini more information about what
we want.
Speaker 1 (15:45):
Just finally, Nikki, I want to get your opinion on
what might be the most trustworthy AI assistant that is
out there right now. So the big four that people
generally would know about are ChatGPT, Claude, Gemini, and
DeepSeek. Out of those four, who do you trust
the most?
Speaker 2 (16:03):
I would have to vote Claude. As a personal and
professional opinion, this is often the system that we go
with when privacy and ethics matter the most. Claude by
default does not share your data, so they don't take
what you're putting in there to use it to train
their next models. And Claude's actually made by two people
that were at OpenAI in the early days and
left because they felt like OpenAI was not ethical enough,
(16:24):
and they started their own company, and they delayed releasing
Claude even beyond the point that it was probably ready because
they wanted to do extra safety testing and they were
concerned around how it behaves, so they missed out on
potentially millions of dollars because they delayed that launch. To me,
that speaks of the highest concern around how these models
(16:44):
behave and what they do for us. So if I
had to pick one of them, I'd go with Claude.
And as a last tip as well, a good way
to think about it: right, we can't avoid using AI,
but if you wouldn't post it on a Reddit forum,
consider whether you really need to be sharing it with
AI. By and large, that's going to keep you pretty safe.
So yeah, until we get clearer options and until the
market becomes a little bit more diversified, if you wouldn't post
(17:07):
it on Reddit, don't put it into an AI model.
Speaker 1 (17:09):
So the basics for using any AI platform are never
share sensitive personal information like financial details or account credentials
like passwords or usernames. Be aware that once data is
provided to AI systems, you may lose control over where
that data goes and who can access it. And AI
tools learn and improve by analyzing your data, which could
(17:32):
then be used for targeted advertising or potentially accessed by
unauthorized parties, so be careful how much you give it
to work with. Thanks for taking the time to feed
your mind with us today. This episode of The Quickie
was produced by me, Claire Murphy, and our executive producer Taylor Strano,
with audio production by Lou Hill.