Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:01):
Think your ChatGPT conversations are private?
Think again.
Is AI more capable of understanding emotions than humans?
A Reddit user blames Microsoft for the loss of 30 years of irreplaceable data, and Nobel Prize winner Geoffrey Hinton warns that humans may be made
(00:22):
obsolete faster than we thought.
Welcome to hashtag Trending.
I'm your host, Dr. Hema Mutti, sitting in for Jim Love.
The artificial intelligence industry faces a privacy reckoning.
A federal judge has rejected a user's bid to protect private conversations
(00:42):
from indefinite preservation in a high-stakes copyright lawsuit. Judge Ona Wang upheld her earlier order to preserve all ChatGPT conversations for evidence, rejecting a motion by ChatGPT user Aiden Hunt. The order stems
(01:02):
from the New York Times lawsuitagainst OpenAI and Microsoft over
alleged copyright infringement.
Hunt represents a growing concern among ordinary users that their private AI conversations could become litigation evidence.
Hunt said he had no warning that this might happen until he saw a report
(01:27):
about the order in an online forum, and is now concerned that his conversations with ChatGPT might be disseminated, including highly sensitive personal and commercial information.
This is not just about one individual's privacy.
OpenAI says the order forces it to retain even deleted ChatGPT
(01:50):
chats and API content that would typically be automatically removed
from systems within 30 days.
This fundamentally breaks the company's privacy promises to users.
However, Judge Wang dismissed Hunt's constitutional concerns with pointed criticism.
(02:11):
She noted the petition does not explain how a court's document retention order that directs the preservation, segregation, and retention of certain privately held data by a private company for the limited purposes of litigation is or could be a nationwide mass surveillance program.
(02:34):
It is not, but industry experts are worried.
Privacy advocates warn that companies might adopt comprehensive data retention policies just to avoid litigation risk, even though no court has ruled this to be legally required, and the privacy implications
(02:55):
haven't been fully considered.
OpenAI CEO Sam Altman said, we will fight any demand that compromises our users' privacy.
This is a core principle.
Referring to the New York Times demand, he said, we think this was an inappropriate
(03:16):
request that sets a bad precedent.
The broader battle continues as OpenAI appeals the preservation order, while fighting the underlying copyright case that could reshape how AI companies handle both training data and user privacy.
(03:37):
Artificial intelligence systems now score higher than humans on emotional intelligence tests, but experts warn the headline-grabbing results don't tell the whole story.
Researchers from the University of Geneva and the University of Bern tested six leading AI models, including ChatGPT-4, on standard
(03:59):
emotional intelligence assessments, and the results were astonishing. AI achieved 81% accuracy compared to just 56% for humans. The tests presented emotionally charged scenarios with multiple-choice answers.
For example, one of Michael's colleagues has stolen his idea and
(04:23):
is being unfairly congratulated. What would be Michael's most effective reaction? AI consistently chose responses that human experts deemed most appropriate.
But experts question: what does this actually measure?
It's worth noting that humans don't always agree on what someone else is
(04:47):
feeling, and even psychologists can interpret emotional signals differently, said finance expert Od Al. So beating a human on a test like this doesn't necessarily mean the AI has deeper insight. It means it gave the statistically expected answer more often.
(05:09):
The fundamental issue is these were multiple-choice tests, far from real emotional situations.
AI systems are excellent at pattern recognition, said Clin Script CEO Noman J, but equating that to a deeper understanding of human emotion risks
(05:31):
overstating what AI is actually doing.
Real-world applications show mixed results.
Alton, an AI used by over 6,000 truck drivers in Brazil, identified stress and sadness with 80% accuracy. But when the cultural context or the lighting changes in an emotion recognition
(05:55):
test, the AI's accuracy drops off a cliff, notes digital expert Jason Hennessy.
Does it show LLMs are useful for categorizing common emotional reactions? said consultant Wyatt Mayhem. Sure, but it's like saying someone's a great therapist because they scored well
(06:16):
on an emotionally themed BuzzFeed quiz.
A Reddit user's digital life disappeared overnight when Microsoft suddenly locked their OneDrive account without warning, trapping three decades of irreplaceable photos and work files with no way to get them back.
(06:40):
The user explained: I was consolidating data from multiple old drives before a major move, drives I had to discard due to space and relocation constraints.
The plan was simple.
Upload to OneDrive, then transfer to a new drive later. After uploading the data, Microsoft abruptly suspended
(07:03):
the account without explanation.
The frustrated user wrote: Microsoft suspended my account without warning, reason, or any legitimate recourse. I've submitted the compliance form 18 times, and each time I get an automated response that leads nowhere.
(07:24):
No human contact, no actual help.
The user called Microsoft a Kafkaesque black hole of corporate negligence and questioned whether the situation violates consumer protection laws.
You just can't hold someone's entire digital life hostage
(07:44):
with no due process, no warning, and no accountability.
Microsoft's terms warn that content and data associated with a locked account will be deleted unless legally required to retain it.
Experts suggest automated systems likely flagged potentially copyrighted
(08:06):
content, triggering the suspension. But this story has critical lessons. First, never trust any cloud provider with all your data. Accounts can be suspended, hacked, or encrypted by ransomware. You must have at least one, preferably two, alternate backups.
(08:27):
At least one should be air gapped andright protected and test these regularly.
A backup is only goodif it can be restored.
Microsoft aggressively pushes OneDrive integration in Windows 11, presenting itself as safe storage, but it has an obligation to truly educate customers and
(08:51):
provide meaningful recovery assistance. Whether it's Microsoft or any other company, the harsh reality remains: when cloud services fail, decades of memories can vanish instantly with no human to call for help.
(09:12):
Nobel Prize winner Geoffrey Hinton warned that AI could make humans obsolete faster than we thought, predicting the danger would manifest through the technology developing schemes and capabilities you would never have thought of. Most concerningly, if you had an argument with them about anything, they would win the argument.
Beyond winning debates, Hinton outlined six major AI risks: unemployment, bias
(09:36):
and discrimination, echo chambers, fake news, autonomous weapons, and existential threats to humanity itself.
His former student, Nick Frosst, co-founder of AI company Cohere, offered a more optimistic view during their joint panel. While both agree AI will disrupt jobs, Frosst believes large language
(10:01):
models will automate only 20 to 30% of computer-based work, whereas Hinton worries about 80% automation.
Frosst said, I think there's a limit, and I think there's lots of work that we do now as people, in our jobs and in our homes and in our personal lives, that an LLM will not be able to do.
(10:23):
Hinton called for massive public pressure to create AI regulations, comparing the situation to climate change action requiring citizen advocacy against resistant industries.
That's our show.
Let us know what you think at technewsday.com. Just use the contact us button on the website.
(10:46):
I'm your host, Dr. Hema Mutti.
Jim is on vacation next week,so I will see you next Friday.