
June 27, 2025 12 mins

In a high-stakes copyright lawsuit, Judge Ona Wang upheld an earlier order requiring the preservation of all ChatGPT conversations for evidence. The decision stems from the New York Times lawsuit against OpenAI and Microsoft over alleged copyright infringement. The ruling has sparked concerns among users like Aiden Hunt, who worry that their private and sensitive conversations with ChatGPT could become part of legal proceedings. Hunt, unaware of the potential for his chats to be used as evidence, fears the dissemination of his personal and commercial information. OpenAI argues that the order undermines its privacy promises, as it forces the company to retain even deleted chats, contradicting its policy of removing such content within 30 days. Despite these concerns, Judge Wang stood firm, dismissing Hunt's constitutional arguments.

00:00 Introduction to the High-Stakes Copyright Lawsuit
00:48 Judge's Order to Preserve ChatGPT Conversations
01:11 Concerns Over Privacy and Litigation Evidence
01:45 Impact on OpenAI's Privacy Policies
02:04 Judge's Dismissal of Constitutional Concerns


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:01):
Think your ChatGPT conversations are private?
Think again.
Is AI more capable of understanding emotions than humans?
A Reddit user blames Microsoft for the loss of 30 years of irreplaceable
data, and Nobel Prize winner Geoffrey Hinton warns that humans may be made

(00:22):
obsolete faster than we thought.
Welcome to Hashtag Trending.
I'm your host, Dr. Hamma Muhi, sitting in for Jim Love.
The artificial intelligence industry faces a privacy reckoning
after a federal judge rejected a user's bid to protect private conversations

(00:42):
from indefinite preservation in a high-stakes copyright lawsuit.
Judge Ona Wang upheld her earlier order to preserve all ChatGPT
conversations for evidence,
after rejecting a motion by ChatGPT user Aiden Hunt. The order stems

(01:02):
from the New York Times lawsuit against OpenAI and Microsoft over
alleged copyright infringement.
Hunt represents a growing concern among ordinary users that their
private AI conversations could become litigation evidence.
Hunt said he had no warning that this might happen until he saw a report

(01:27):
about the order in an online forum and is now concerned that his conversations
with ChatGPT might be disseminated,
including highly sensitive personal and commercial information.
This is not just about one individual's privacy.
OpenAI says the order forces it to retain even deleted ChatGPT

(01:50):
chats and API content that would typically be automatically removed
from systems within 30 days.
This fundamentally breaks the company's privacy promises to users.
However, Judge Wang dismissed Hunt's constitutional concerns
with pointed criticism.

(02:11):
She noted the petition does not explain how a court's document retention order that
directs the preservation, segregation, and retention of certain privately held
data by a private company for the limited purposes of litigation is or could be
a nationwide mass surveillance program.

(02:34):
It is not. But industry experts are worried.
Privacy advocates warn that companies might adopt comprehensive data
retention policies just to avoid litigation risk, even though no
court has ruled this to be legally required, and the privacy implications

(02:55):
haven't been fully considered.
OpenAI CEO Sam Altman said, we will fight any demand that compromises our users' privacy.
This is a core principle.
Referring to the New York Times demand, he said, we think this was an inappropriate

(03:16):
request that sets a bad precedent.
The broader battle continues as OpenAI appeals the preservation order, while
fighting the underlying copyright case that could reshape how AI companies handle
both training data and user privacy.

(03:37):
Artificial intelligence systems now score higher than humans on emotional
intelligence tests, but experts warn the headline-grabbing results
don't tell the whole story.
Researchers from the University of Geneva and the University of Bern
tested six leading AI models, including ChatGPT-4, on standard

(03:59):
emotional intelligence assessments, and the results were astonishing.
AI achieved 81% accuracy compared to just 56% for humans.
The tests presented emotionally charged scenarios with multiple-choice answers.
For example, one of Michael's colleagues has stolen his idea and

(04:23):
is being unfairly congratulated.
What would be Michael's most effective reaction?
AI consistently chose responses that human experts deemed most appropriate.
But experts question what this actually measures.
It's worth noting that humans don't always agree on what someone else is

(04:47):
feeling, and even psychologists can interpret emotional signals differently,
said finance expert od Al.
So beating a human on a test like this doesn't necessarily
mean the AI has deeper insight.
It means it gave the statistically expected answer more often.

(05:09):
The fundamental issue is that these were multiple-choice tests, far
from real emotional situations.
AI systems are excellent at pattern recognition, said ClinScript CEO
Noman J, but equating that to a deeper understanding of human emotion risks

(05:31):
overstating what AI is actually doing.
Real-world applications show mixed results.
Alton, an AI used by over 6,000 truck drivers in Brazil,
identifies stress and sadness with 80% accuracy.
But when the cultural context or the lighting changes in an emotion recognition

(05:55):
test, the AI's accuracy drops off a cliff, notes digital expert Jason Hennessy.
Does it show LLMs are useful for categorizing common emotional reactions?
said consultant Wyatt Mayhem.
Sure, but it's like saying someone's a great therapist because they scored well

(06:16):
on an emotionally themed BuzzFeed quiz.
A Reddit user's digital life disappeared overnight
when Microsoft suddenly locked their OneDrive account without warning, trapping
three decades of irreplaceable photos and work files with no way to get them back.

(06:40):
The user explained, I was consolidating data from multiple old drives before a
major move, drives I had to discard due to space and relocation constraints.
The plan was simple.
Upload to OneDrive, then transfer to a new drive later.
After uploading the data, Microsoft abruptly suspended

(07:03):
the account without explanation.
The frustrated user wrote, Microsoft suspended my account without warning,
reason, or any legitimate recourse.
I've submitted the compliance form 18 times,
and each time I get an automated response that leads nowhere.

(07:24):
No human contact, no actual help.
The user called Microsoft a Kafkaesque black hole of corporate negligence
and questioned whether the situation violates consumer protection laws.
You just can't hold someone's entire digital life hostage

(07:44):
with no due process, no warning, and no accountability.
Microsoft's terms warn that content and data associated with a locked
account will be deleted unless legally required to retain it.
Experts suggest automated systems likely flagged potentially copyrighted

(08:06):
content, triggering the suspension. But this story has critical lessons.
First, never trust any cloud provider with all your data. Accounts can be
suspended, hacked, or encrypted by ransomware. You must have at least
one, preferably two, alternate backups.

(08:27):
At least one should be air-gapped and write-protected, and test these regularly.
A backup is only good if it can be restored.
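That restore test can be automated. As a minimal sketch of the idea (the paths, function names, and directory layout here are hypothetical illustrations, not anything from the show), a short script can compare a checksum of every file in the backup against the original copy:

```python
import hashlib
from pathlib import Path


def file_digest(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_backup(source: Path, backup: Path) -> list[str]:
    """Compare every file under `source` with its counterpart under `backup`.

    Returns the relative paths of files that are missing from the backup
    or whose contents differ. An empty list means the backup matches.
    """
    problems = []
    for src_file in source.rglob("*"):
        if not src_file.is_file():
            continue
        rel = src_file.relative_to(source)
        dst_file = backup / rel
        if not dst_file.is_file() or file_digest(src_file) != file_digest(dst_file):
            problems.append(str(rel))
    return problems
```

Run against a restored copy of the backup (not just the backup medium itself), and schedule it regularly; a verification that never runs is no better than a backup that never restores.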
Microsoft aggressively pushes OneDrive integration in Windows 11, presenting
itself as safe storage, but they have obligations to truly educate customers and

(08:51):
provide meaningful recovery assistance.
Whether it's Microsoft or any other company, the harsh reality remains:
when cloud services fail, decades of memories can vanish instantly
with no human to call for help.

(09:12):
predicted it would manifest through the technology developing
schemes and capabilities you would never have thought of.
Most concerningly, if you had an argument with them about anything,
they would win the argument.
Beyond winning debates, Hinton outlined six major AI risks: unemployment, bias

(09:36):
and discrimination, echo chambers, fake news, autonomous weapons, and
existential threats to humanity itself.
His former student Nick Frosst, co-founder of AI company Cohere, offered a more
optimistic view during their joint panel.
While both agree AI will disrupt jobs, Frosst believes large language

(10:01):
models will automate only 20 to 30% of computer-based work, whereas
Hinton worries about 80% automation.
Frosst said, I think there's a limit, and I think there's lots of work that
we do now as people, in our jobs and in our homes and in our personal lives,
that an LLM will not be able to do.

(10:23):
Hinton called for massive public pressure to create AI regulations,
comparing the situation to climate change action requiring citizen
advocacy against resistant industries.
That's our show.
Let us know what you think at technewsday.com.
Just use the contact us button on the website.

(10:46):
I'm your host, Dr. Hema Mutti.
Jim is on vacation next week, so I will see you next Friday.