
June 1, 2023 20 mins

An organization in charge of a hotline to help people with eating disorders finds out that chatbots aren't a good substitute for a human operator. A judge in Texas explains that generative AI has no place in his courtroom. And Meta and Amazon both face some challenges around the world.

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio. And how the tech are you? It's time for some tech news for June the first, twenty twenty three. How did we get to

(00:26):
June already? Yikes. All right, it's another AI-heavy news episode, because, you know, that's what's going on out there. You might recall that on Tuesday I talked about Steven Schwartz, the lawyer who submitted a filing in a legal case that contained false information courtesy of ChatGPT. He didn't

(00:48):
do it on purpose. ChatGPT just gave him information that was not true. Schwartz had used ChatGPT as part of his legal research in a case, and the chatbot invented some cases that never actually existed as precedents. Now, Schwartz did actually think to ask ChatGPT if the cases were real. Like, there's an exchange

(01:10):
where he said, hey, is that a real case? And ChatGPT is like, yeah, yeah, totes real. So it turns out you just can't trust a chatbot. This, by the way, is why I'm really concerned about these chatbots being incorporated directly into things like web search, you know, with Microsoft and Google both rushing to do that. I

(01:32):
think it's a mistake for multiple reasons, one of which is this tendency toward hallucinations. Now, Schwartz is awaiting a hearing on June eighth that will determine what sanctions, if any, he will face as a result of this goof-up. Meanwhile, in Texas, a

(01:54):
judge named Brantley Starr has made it clear that he
will not abide any AI chatbot shenanigans in his court.
He said that attorneys in his courtroom must promise that
quote no portion of the filing was drafted by generative
artificial intelligence end quote, or if any part of the
filing did involve generative AI in any respect, a human

(02:20):
being must have checked the information and verified it to
be true and accurate. This covers pretty much anything lawyers
would submit to the court, and I think it's an
excellent idea. As we saw with Schwartz, AI just is
not trustworthy. It can make stuff up in some cases,

(02:41):
and honestly, this step is good for everyone in the
legal system. In the long run, I wouldn't be surprised
to see other judges follow suit. OpenAI, meanwhile, is trying to address this troubling problem of AI hallucinations. So yes, in case you forgot what hallucination means with regard

(03:01):
to AI, it just means an incident in which AI
invents information, such as the fake legal cases cited in
Schwartz's situation. And it's not that AI is a pathological
liar or has some sort of motivation to give us
the wrong information. It's more like, when this AI gets
into a situation where it does not have all the

(03:23):
relevant information, sometimes it just makes stuff up in the
absence of reliable info. To me, it's kind of like
if you've ever had a friend who just seems incapable
of saying the words, I don't know the answer to that, then this probably feels like a very familiar situation, right?

(03:44):
Just think of someone who, rather than say, I don't know, that's interesting, I don't know, they say, oh, it's probably because of such and such. Or maybe they don't even, you know, try to equivocate. They just outright say something they think is probably true, and they don't know one way or the other. It's kind of what OpenAI says is happening with ChatGPT, and actually what a lot of AI experts say is happening

(04:07):
with generative AI in general. And so now the company
is saying it's going to revisit how AI works toward
creating an answer. So right now, the model apparently follows
a process called outcome supervision, in which the goal is
just to get the final answer. It doesn't really matter
what pathway you took to get there. The ends justify

(04:30):
the means, in other words. So with outcome supervision, the AI just gets a reward if the answer it provides at the end of the day is correct. The problem is that when the AI makes a mistake, say early
on in the process, this can have a much larger
effect further on in the process. Like if you've ever
put something together and you made a mistake early, you

(04:53):
might realize by the time you're getting toward the end that that small mistake has created a huge problem much further along in the process. Well, the same is true with AI. And so OpenAI is saying they're looking at changing over to process supervision, in which, you know, rewards for the AI occur throughout

(05:14):
the reasoning process. The thought is that the AI would be rewarded at every step along the way as it made the right choices, and thus would be less likely to make a mistake further down the line.
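To make that distinction concrete, here's a minimal toy sketch in Python. Everything in it, the function names, the fake three-step reasoning chain, the step checker, is invented for illustration; it sketches the general idea of outcome versus process rewards, not OpenAI's actual training setup.

```python
# Toy contrast between outcome supervision and process supervision.
# All names and data here are hypothetical, purely for illustration.

def outcome_reward(final_answer, correct_answer):
    """Outcome supervision: a single reward based only on the end result."""
    return 1.0 if final_answer == correct_answer else 0.0

def process_rewards(steps, step_is_valid):
    """Process supervision: one reward per intermediate step, so a
    mistake gets penalized at the step where it actually happens."""
    return [1.0 if step_is_valid(step) else 0.0 for step in steps]

# A made-up three-step "reasoning chain" for (2 + 2) * 3 - 1, which
# should equal 11. The model botches step two but keeps going anyway.
chain = ["2 + 2 = 4", "4 * 3 = 13", "13 - 1 = 12"]

# Outcome supervision can only say the final answer was wrong...
print(outcome_reward(final_answer=12, correct_answer=11))  # 0.0

# ...while process supervision pinpoints the bad step, step two.
check = lambda s: eval(s.replace("=", "=="))  # acceptable for this toy only
print(process_rewards(chain, check))  # [1.0, 0.0, 1.0]
```

The outcome score just says the final answer missed; the per-step scores localize the failure to the one step that went wrong, which is the whole pitch for process supervision.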

(05:36):
Critics argue, though, that it may not make any difference at all to the amount of misinformation or
just fake information that is generated by AI. It might
not matter what process it uses, but rather what's more
important is that AI operates with a lack of transparency,
so it can be really hard to pinpoint where a
problem starts because you can't actually see what the process is.

(06:00):
And if you can't see what the process is, it's
very hard to diagnose where the problem is popping up.
And so the critics worry that this change in method
won't actually solve the problem of AI creating incorrect responses
and misinformation. Over in Italy, a senator named Marco Lombardo
stood before Parliament and delivered a speech about Italy's agreements

(06:23):
with Switzerland, and that does not sound particularly techie, except at the conclusion of the speech, Lombardo revealed that the speech he read out was not written by a human being. Instead, it was generated by AI. And further, Lombardo
said he did this in order to prompt a larger
conversation about AI and to really consider what it can

(06:48):
do and the potential consequences that can occur if people
misuse it or if the AI does not perform as expected.
Italy has been one of the more proactive countries in considering AI and being critical of AI. Previously, Italy banned ChatGPT, although only temporarily, and did so because

(07:09):
of concerns that information shared with the chatbot would not
be secure and thus violate citizen privacy laws. And we've seen with ChatGPT in particular that chat histories ended up becoming exposed, right? People were suddenly able
to see what other people had been talking about with

(07:30):
ChatGPT. So I think privacy is an understandable concern. But it's also good to see governments having
these discussions to really seriously talk about how to think
about AI in order to make the best use of
it and not have it create problems. Italy's approach has

(07:51):
motivated the EU in general. Lawmakers there, as well as some in the United States, have indicated that they are
now working on a code of conduct regarding AI and
AI companies. Representatives from the EU and the United States
met at a Trade and Technology Council meeting and started
to talk over this code of conduct, which would be voluntary.

(08:15):
To my mind, a voluntary code of conduct is a
little bit on the weak sauce side. I get that
optics can be bad if an AI company refuses to
adopt the code of conduct, and that might mean that
the company would find it difficult to do business with
customers if they refused to sign on to this code

(08:36):
of conduct. So there might be some social pressures and
business pressures to do it, but it's voluntary. Now, I am not going so far as to claim there's an existential-threat-level risk out there, but there is risk, and that's bad enough. And considering that risk, I think we might need more

(08:57):
than just a voluntary code of conduct in order to
keep things in line. Also, it'll be interesting to see
what role various AI companies, and OpenAI in particular, will take in helping draft this code of conduct. You know, Sam Altman, the CEO of OpenAI, has tried to

(09:17):
get in front of this stuff, and it makes me worry, because if the people who are the subject of a code of conduct are actually allowed to write, or at least influence the writing of, that code of conduct,
you can end up with rules that don't actually guard
against anything at all, and then it's just optics and

(09:38):
that's useless. And for a horrifying story of how AI
can be harmful, Chloe Xiang of Motherboard has an article titled Eating Disorder Helpline Disables Chatbot for Harmful Responses After Firing Human Staff. Yeah, that headline, that's a lot. So the story here is that the National Eating Disorders

(10:02):
Association, or NEDA, made a plan to replace human operators of a mental health hotline with a chatbot called Tessa.
So I guess the idea was that the chatbot would
be more efficient and cheaper than keeping human beings who
have expertise and experience and you know, empathy on the payroll.

(10:25):
But you know, this is not a helpline you would call if your lawnmower stopped working. Like, I can see a chatbot being used for something rather mundane like that. This is a helpline designed for people who are dealing with eating disorders. Union representatives have accused NEDA of using union-busting tactics and warned that relying on AI could

(10:47):
lead to terrible situations. And earlier this week, a social
media post about how this AI chatbot led to a
terrible situation went viral. So, first up, there was an
activist named Sharon Maxwell who decided to test out Tessa, and she said that, quote, every single thing Tessa suggested
were things that led to the development of my eating

(11:08):
disorder end quote. So by that, what I think Maxwell
is saying is that she had previously developed an eating disorder,
and the thoughts that went into her head that led
to her developing this eating disorder were the exact same
things that this chatbot was now suggesting as advice. So,

(11:28):
in other words, Tessa was giving Maxwell the advice of, Hey,
maybe you can lose a pound or two by doing this,
this, and this. And when you make that suggestion to someone who's dealing with an eating disorder, that
is a very dangerous thing. And then a psychologist named
Alexis Conason, and my apologies for the pronunciation of your

(11:51):
last name, I'm sure I'm getting it wrong. She conducted
her own test and found similar results. So what was
NEDA's response? Well, initially the organization accused Conason of fabricating the whole thing, so she sent screenshots of her conversation to NEDA, and then not too long after that, NEDA took Tessa offline in order to address some quote

(12:14):
unquote bugs in the program. While Tessa has guardrails that
are meant to keep the chatbot from doing stuff like this,
we have seen again and again that AI can bypass
guardrails even if the person on the other end of
the conversation isn't trying to force things that way, and

(12:35):
the story really points out that for some jobs you
really probably should just depend upon human beings to do
the work. Okay, we're going to take a quick break.
When we come back, we've got some more news items
to cover. Okay, we're back. Meta says it's going to

(13:02):
remove posts containing news content for users in the state
of California if California passes a law that would require
platforms like Facebook and Google to pay publishers if work
from those publishers shows up on those platforms. We've actually
seen this issue crop up around the world. Notably, it

(13:23):
happened in Australia a couple of years ago. When Australia
passed a similar law, Facebook went dark in Australia temporarily.
Eventually the law took hold and things have kind of
entered into an equilibrium. Whether or not the law actually
addressed the issue that was of concern is another matter
that actually is a good subject for an episode, honestly.

(13:46):
But the idea is that publishers want compensation. They're arguing
that platforms like Google and Facebook are siphoning traffic away
from the actual news websites, which is where the publishers monetize that traffic,
and these platforms are benefiting from the work being done
by journalists, but they're not compensating the news outlets in

(14:08):
the process. Meta's Andy Stone, whose name I
see pop up pretty much whenever the company wants to
dismiss regulations that would work against it, said the bill,
if signed into law, would amount to nothing more than
a slush fund that would benefit large media companies but
not help smaller California-based publishers, which, depending on how

(14:30):
this bill is framed, that actually might be true, because
I've heard similar criticisms about the Australian law. Now, I don't know, I haven't read the bill. Even if I had read the bill, I probably wouldn't have a good grasp on its limitations, because law speak be scary, y'all. It's
even more difficult to read than a really complex technical manual.
But yes, this is another battle we're starting to see unfold.

(14:55):
I don't see California backing down from this, and it
will be interesting to see where this goes. But honestly,
at the end of the day, I'm mostly concerned that
the law actually does what the law aims to do, right?
It gets very frustrating when you hear about laws that
potentially could correct a situation, but because of how they

(15:17):
are written and how they're enacted, they fail to do what their purpose was at least claimed to be. Meta also
faces a fine levied by a court in Russia this week.
The charge is that the company failed to remove prohibited
content from WhatsApp, specifically about a drug called Lyrica, and

(15:38):
so the fine is three million rubles, which equates to
about thirty-seven thousand dollars. A Russian court also fined the Wikimedia Foundation a similar amount of money, also three million rubles, saying that the Wikimedia Foundation failed to remove quote unquote false information about Russia's war in Ukraine.

(16:01):
Something tells me that neither Meta nor Wikimedia Foundation will
consider these moves particularly intimidating. Meta certainly won't. Thirty-seven thousand dollars is, like, I don't even think they would notice if that money went away. So I don't think
that this is really a big move against the companies.
Amazon also has a couple of bills to settle this week.

(16:23):
First up, the company agreed to pay a thirty-eight-million-dollar settlement in relation to a lawsuit that accused the company of having illegally collected and stored information relating to children through the digital personal assistant whose name starts with A, ends with A, and has lex in the middle.
The Federal Trade Commission in the United States brought the

(16:45):
case against Amazon, so this settlement requires that the company
change its data collection, storage, and deletion practices on top
of paying the fine. The other bill Amazon has to
pay is five point eight million dollars. This is another settlement,
also with the FTC. This one is with regard to
Amazon's ring products. Those are the security systems and doorbell

(17:08):
camera systems. The FTC accused Amazon of having a system
that allowed employees and contractors to access video feeds from
customer cameras without any real safeguards to prevent that from happening.
As you might imagine, that is a huge privacy and
security violation. Amazon has agreed to create new processes with

(17:29):
regular checkups to make sure that the company has a
tighter data security strategy, while also simultaneously saying we never
broke the law, because that's what you can do when
you make settlements, all right. One trend happening with some
tech platforms is to make changes to how these platforms
give access to an API, which is an application programming interface.

(17:49):
That's what lets app developers tap into larger platforms in order to do whatever it is they do. So a developer might create an app that ties into a larger platform like Twitter or, as we'll talk about in a second, Reddit, and these apps then send requests to the underlying platform, and that populates the app.
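For a concrete picture of that request-and-populate pattern, here's a minimal hypothetical sketch in Python that pulls a few post titles from Reddit's public JSON endpoint. A real third-party client like Apollo uses the authenticated API, but the shape of the interaction is the same, and every call like this counts as one request against the developer's quota.

```python
# Hypothetical sketch: fetch a few post titles to populate a client app.
import json
from urllib.request import Request, urlopen

def fetch_titles(subreddit):
    req = Request(
        f"https://www.reddit.com/r/{subreddit}/hot.json?limit=5",
        headers={"User-Agent": "example-client/0.1"},  # Reddit expects a UA string
    )
    with urlopen(req) as resp:
        data = json.load(resp)
    # Each listing item carries the post data the app's feed view needs.
    return [child["data"]["title"] for child in data["data"]["children"]]

print(fetch_titles("technology"))
```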

(18:13):
So Twitter has a price tag associated with this. For every fifty million tweets requested, a developer has to cough up forty-two
thousand dollars. And yes, fifty million tweets, that's a lot.
But if your app is super popular and a lot
of people are using it all day, you're going to
hit that fifty million mark over and over again. Reddit
is now doing something similar. They've changed their API, and

(18:36):
Christian Selig, the developer behind the popular Reddit app Apollo,
reports that he might have to shut the app down
entirely because of Reddit's new policy, which is to charge
twelve thousand dollars per fifty million requests. Selig revealed that
Apollo generates around seven billion requests every month, and that
means it would cost around twenty million dollars a year

(18:58):
to operate. The app, understandably, is not in a position to pay that much. Generally speaking, the app developer community is not too keen on this approach, as it punishes you for being successful.
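And if you want to check that twenty-million-dollar figure, the back-of-the-envelope math works out. Here's the arithmetic as a quick sketch, using the numbers reported above:

```python
# Rough cost estimate for Apollo under Reddit's reported API pricing.
requests_per_month = 7_000_000_000   # ~7 billion API requests per month
price_per_block = 12_000             # $12,000 per block of requests
block_size = 50_000_000              # each block covers 50 million requests

monthly_cost = requests_per_month / block_size * price_per_block
print(f"${monthly_cost:,.0f} per month")      # $1,680,000 per month
print(f"${monthly_cost * 12:,.0f} per year")  # $20,160,000 per year
```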
Finally, if you have a PC rig with a motherboard from Gigabyte, you should know that security researchers at Eclypsium discovered the company had created a backdoor system to deliver firmware updates to the motherboard, and

(19:22):
that system lacks proper security, which means a hacker could
potentially hijack that delivery system and use it to send
executable code straight to target computers. Do not pass go, do not collect two hundred dollars. If you're curious whether your device has a Gigabyte motherboard, you can go to the Start menu in Windows and look at System Information.
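Or, if you'd rather check from a script, here's one hypothetical way to do it in Python using the built-in wmic tool, which is deprecated on newer Windows builds but still present on many systems. This is just an illustration, not a tool from Gigabyte or Eclypsium.

```python
# Query the motherboard maker and model on Windows via the wmic utility.
import subprocess

output = subprocess.check_output(
    ["wmic", "baseboard", "get", "Manufacturer,Product"],
    text=True,
)
print(output)

if "gigabyte" in output.lower():
    print("Gigabyte board detected; check Eclypsium's advisory for affected models.")
```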

(19:44):
More than two hundred and fifty motherboard models are affected by this, so that's a big ouch. Supposedly, Gigabyte is working on this and intends to create a solution, but as of right now, I don't know of any solution that's actually available, so be careful out there. All right,
that's it for the tech news for today, June first,

(20:06):
twenty twenty three. I hope you're all well, and I'll
talk to you again really soon. TechStuff is an
iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app,
Apple Podcasts, or wherever you listen to your favorite shows.
