Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production from iHeartRadio. Hey there,
and welcome to TechStuff. I'm your host, Jonathan Strickland.
I'm an executive producer with iHeart Podcasts, and how the
tech are you? It's time for the tech news for
the week ending on August sixteenth, twenty twenty four. So
(00:26):
for this first news item, I wanted to bring a
guest onto the show to talk about it in further detail.
Joining TechStuff to give us her expertise is friend
of the show, Shannon Morse. She's a hacker, she's a YouTuber,
she's a Ren Faire enthusiast, and I'm proud to call
(00:49):
her a friend. Shannon, welcome back to TechStuff.
Speaker 2 (00:53):
Hi, thank you so much for having me. You kind
of surprised me when you said Ren Faire enthusiast. Yes,
I am.
Speaker 1 (00:59):
Well, to be fair, so am I. I worked Ren Faire
from nineteen ninety nine to twenty nineteen. So cool.
We're not alone in this. But I've brought you on
not to chat about Ren Faire, although that would be a
heck of a podcast. Next time, next time we'll talk about
that, talk about the tech of Renaissance-era Europe.
I thought we would talk about
(01:20):
this hacking attack that has exposed billions of records for people,
not just in the United States, but in Canada and
other places as well. I wanted to hear from you
sort of what happened, who was involved, and what are
the implications of this attack.
Speaker 2 (01:38):
Well, let's go ahead and start with what exactly happened.
So way back in April of twenty twenty four, so
this year, there was this Twitter account that found a
leak online, and they were like, oh, this seems like
it came from a cybercrime group that is called USDoD,
(01:58):
and nobody really knew what was going on. It wasn't
until July twenty first, twenty twenty four, that about four
terabytes of data was released on BreachForums. And BreachForums
is basically this cybercrime community on the dark web
where they often leak private and personal information of regular people.
This happens really often, and in this case they figured
(02:21):
out, and they claim that it was from a data
broker called National Public Data dot com, and this was later confirmed.
So around mid August, just, gosh, yesterday, people started confirming
that this was from the data broker National Public Data dot com.
The amount of data came to about two point
(02:43):
nine billion records. The unfortunate part is a lot of
journalistic media outlets are saying that it was the data
of almost every American, or that two point nine billion people
had their data leaked, and that's not necessarily the case,
so that was a little misconstrued. It's actually records, and
there could be records on deceased people, there could be
(03:05):
multiple records on one person, there could be records on businesses,
and all of these seem to be the case when
it comes to these two point nine billion records. But
this does include everything from names and addresses to,
in some cases, Social Security numbers and some other personal information.
So it's a pretty serious issue.
Speaker 1 (03:25):
Yeah, if you're just talking about
data breaches that affect, or potentially could affect, the average person,
this one seems pretty bad. From what I understand,
the data broker in question typically would do things like
background checks for various companies.
Speaker 2 (03:42):
Yes, and that's kind of unfortunate, because a lot of
businesses probably used this company for background checks, and there's
no real way of knowing, like when you first got
employed with a company, or if you signed up for
a new credit card, whether they're using this business
for background checks. So you really have no idea which
data brokers have your data. And if National Public Data
(04:04):
did have your data, well.
Speaker 1 (04:06):
Clearly the data was unencrypted, because otherwise it would be
very difficult to make any use of this information. I
don't want to just make an assumption or have my
bias inform this, so I thought I would ask you directly, Shannon,
in your opinion, is the storage of that sort of
data in an unencrypted form? Is that what you would
(04:27):
perhaps call bad security practices?
Speaker 2 (04:32):
I think we could both agree here, assuming that you're
of the same opinion as I. Absolutely, having any of
that kind of data unencrypted or easily accessible in a
storage format is really, really bad, especially when it comes
to a data broker. Unfortunately, data brokers are legal here
in the United States, and we really have no safety
(04:55):
nets or, like, safety regulations, no regulatory powers to control what
data brokers can and cannot do with that data. So
they should have been taking care of that data in
an encrypted format, especially if they want to make money
off of it. Because now all this data has leaked,
chances are, just from a business standpoint, this data broker
will not make as much money doing those background checks,
because people aren't going to trust
Speaker 1 (05:19):
Them as much. Now, sure, there's going to be class
action lawsuits and such directed against the company. I hope so. Yeah.
So I imagine that will have an impact on their
revenue as well. Thank you for pointing out the lack
of regulation and safety net here in the United States.
Obviously in places like the EU, there's been a lot
of work done to try and get a
(05:40):
handle on that because it is tricky. I think here
in the US, all of our attention tends to go
to the end points, like the social platforms where people
are sharing info, but we ignore, like, all the big
companies in the background that are literally buying and selling
that info all the time.
Speaker 2 (05:57):
I mean, we've seen this since the early Facebook days
with Cambridge Analytica, if you remember that big data leak,
Like, it's a very, very similar scenario here, where this background
company that nobody really knows exists has so much data
and so many profiles, and they're building these profiles around us,
and we have no say in the matter. They just
(06:20):
do it without our knowledge, without our consent. And then
we hear about one of these breaches and we're like, well,
I've never done business with them, why do they have
my information? Why did this happen to me? So unfortunately
this is the case in the United States, and luckily
we do have some options in terms of what we
can do to protect ourselves. But it's not one hundred percent,
and I don't know if it ever will be.
Speaker 1 (06:41):
Yeah, Like, the suggestions I give to people include things like,
when it comes to accounts you already use, if you
haven't activated multi factor authentication, you should just do that
as a matter of practice on everything that offers it.
And if you're concerned about someone making use of say
your Social Security number and your address, like they have
(07:02):
all the details they need for identity theft, you can
put a freeze on your credit with the three major
companies here in the United States. It's a pain in
the butt to do it. It's a pain in the
butt to thaw if you freeze it, but it's less
of a pain in the butt than someone taking out
a loan under your name. So, yeah, this is bad. It's
(07:24):
bad for all of us because there was nothing that
the average person could have done to protect themselves from
this particular attack.
Speaker 2 (07:31):
And even though your data may not have been involved
in this, you can go to a website called
Have I Been Pwned (that's "owned" with a "p") dot
com to see if your data was leaked in this
one as well as other previous data breaches. You can
see if your data was indeed leaked in any of those.
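[Note: for the technically inclined, here is a minimal sketch of that kind of lookup against Have I Been Pwned's public v3 API. It assumes you have an HIBP API key, which breach-by-email lookups require, and the third-party requests package installed; the key and email address below are placeholders, not real values.]

```python
# Minimal sketch: ask Have I Been Pwned (v3 API) which known breaches
# include a given email address. Assumes an HIBP API key (required for
# breach-by-email lookups) and the third-party "requests" package.
import requests

API_KEY = "your-hibp-api-key"  # placeholder; keys come from haveibeenpwned.com
EMAIL = "you@example.com"      # placeholder address to check

resp = requests.get(
    f"https://haveibeenpwned.com/api/v3/breachedaccount/{EMAIL}",
    headers={
        "hibp-api-key": API_KEY,
        "user-agent": "breach-check-sketch",  # HIBP requires a user agent
    },
    timeout=10,
)

if resp.status_code == 404:
    print("No breaches found for that address.")
else:
    resp.raise_for_status()
    for breach in resp.json():
        print("Found in breach:", breach["Name"])
```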
That can help you kind of get a mindset of
(07:53):
where is my data, who has my data currently? And
then you can also, you know, like you said, do
the credit freeze at Experian, Equifax, and TransUnion. That's extremely
important and one of the first steps you should take.
I recommend that to anybody who is curious about like
if their Social Security number has gotten out there, or
any kind of personally identifiable information that would allow for
(08:17):
any kind of identity theft. That's really important. And using
kind of proper security hygiene online can certainly help as well,
because you will run into phishing scams when people find
out your email address and your name. You might run
into spam texts or phishing texts or spammy calls. You
might run into the same thing with people sending you
(08:38):
spam and phishing emails. So if you have these kinds
of issues, or if you see them starting to rise,
then look into signing up for a password manager, which
can make your life a lot easier when you're auditing
how many accounts you have online and making sure that
you're changing those passwords, because you can stay up to
date and just use, like, auto generation tools in password
(09:00):
managers to make really really good, strong protective passwords that
even you don't know, but the password manager does, so
it's fine. You won't lose your entry into your profiles.
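[Note: as a rough illustration of what those auto generation tools are doing, here is a minimal sketch using Python's standard secrets module. The length and character set are arbitrary choices for the example, not what any particular password manager uses.]

```python
# Minimal sketch of the kind of random password generation a password
# manager's auto-generate feature performs. Uses Python's "secrets"
# module for cryptographically strong randomness. Length and alphabet
# are arbitrary illustrative choices.
import secrets
import string

def generate_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

if __name__ == "__main__":
    print(generate_password())  # different on every run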
Your password manager will help you. Use two factor authentication
to protect your accounts, especially if passwords get leaked, because
even if you are using a different password on every website,
(09:22):
if one of your profiles gets leaked, that profile could
be hacked. So you might as well set up two
factor authentication and make sure none of those accounts get hacked.
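[Note: for the curious, the six digit codes most authenticator apps produce come from the TOTP algorithm (RFC 6238). Here is a minimal sketch; the base32 secret is a made-up placeholder, and real apps handle clock drift and other edge cases.]

```python
# Minimal sketch of TOTP (RFC 6238), the algorithm behind most
# authenticator-app codes: HMAC-SHA1 over a 30-second time counter,
# dynamically truncated to a short numeric code. The secret below is
# a made-up placeholder, not a real credential.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32.upper())
    counter = int(time.time()) // period                    # time-step counter
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                              # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    print(totp("JBSWY3DPEHPK3PXP"))  # placeholder secret; prints a 6-digit code
```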
And using proper security hygiene when it comes to public
networks, like using VPNs, making sure you're not logging into
accounts on public Wi-Fi, and using secure networks, is
(09:42):
a really, really good practice. When it comes to data brokers,
it's extremely hard to manually opt out because there's hundreds
of them, including National Public Data dot com. But you can
sign up for a data broker opt out tool like
DeleteMe. That's the one that I use, and I
have paid for it as a customer long before they
(10:03):
knew about my YouTube channel. They will go in and
opt you out of having your data on the data brokers,
and they do it quarterly because data brokers often will
repurchase your data and put it back on their platforms,
so people can continuously go to these data broker websites
and search for your profile and find your address, your
(10:23):
phone number, your name, the names of your kids. Like,
it's pretty invasive what they can do. So I've been
paying for data broker opt out services for almost a
decade now and it's definitely helped with like clearing the
kind of data that's out there that I have no
control over and just kind of taking a step towards
(10:44):
my security.
Speaker 1 (10:45):
That's great advice, and obviously, like, these are steps that
hopefully one day will no longer be necessary, when we have
the protective measures in place that make them moot. I
hope so. I do too. Hope springs eternal, Shannon.
I gotta hope, or else I wither away. Well, yes, Shannon,
thank you so much for joining the show and letting
(11:07):
us know more about this hack and what it means
and what people can do. I think that helps alleviate
a lot of anxiety when people start to hear about
the steps they can take to best protect themselves. I
want to thank Shannon again, and we're gonna take a
quick break. When we come back, we have more news
to talk about. Mark Zuckerberg probably gets lots of requests
(11:41):
being the head of Facebook and all. This week, nineteen
of those requests came from members of the United States
Congress who would very much like to hear why Meta
has allowed ads for such spicy stuff as cocaine and
ecstasy to appear on Facebook and Instagram. I mean, these
substances aren't legal, so why is Meta allowing advertisers to
(12:03):
run ads for them on these very popular platforms. An
investigation from the Tech Transparency Project or TTP, prompted this inquiry.
In the investigation, the TTP found more than four hundred
examples of ads for drugs of varying degrees of legality
on these platforms, and lawmakers would like Zuckerberg to explain
(12:24):
how this could happen. In a letter to the CEO,
the legislators wrote, quote, this was not user generated content
on the dark Web or on private social media pages,
but rather they were advertisements approved and monetized by Meta.
Many of these ads contained blatant references to illegal drugs
in their titles, descriptions, photos, and advertiser account names, which
(12:48):
were easily found by the researchers and journalists at the
Wall Street Journal and Tech Transparency Project using Meta's ad library. However,
they appeared to have passed undetected or been ignored by
Meta's own internal processes. End quote. Now, according to the TTP,
many of these ads prompted users to click over to
a Telegram account to complete any shopping that they wished
(13:12):
to do. Meta has until September sixth to respond. Now,
I can't say that I'm terribly surprised by this story. Personally,
I have encountered so many suspicious ads on Facebook that
I suspect the quality control aspect of the Ads division
is purposefully lax. Now, in my case, the ads weren't
(13:33):
for illegal drugs, but rather they were ads
that were fake. I mean, they were posing as other companies,
like some other entity, in an attempt to fool consumers
into shopping for shoddy knockoffs. The big example I can
think of off the top of my head was the Sam
Ash music stores in the United States. They went out
(13:55):
of business and closed down, and I saw ad after
ad after ad posing as them, but they weren't Sam Ash.
They were some other fly by night company trying to
sell knockoffs or sometimes just boxes of literal trash rather
than whatever item you thought you were going to get. Now.
(14:15):
I have reported these ads to Facebook, and frequently I
received the message that after review, Facebook determined there were
no violations of its policies. Meanwhile, the actual legitimate companies
like Sam Ash would post warnings on their pages about
scams like this. I suspect that unless lawmakers make it
really hurt to engage in irresponsible ad partnerships, we're not
(14:37):
going to see any significant change on this front. In
the world of tech and politics, the Trump campaign has
been hit with a spear phishing attack that compromised campaign assets.
The Trump campaign identified hackers backed by Iran as the
party responsible, and now Google's Threat Analysis Group, or TAG,
(14:57):
has released a statement saying that yes, they have observed
that the hacker group APT forty two, which is linked
to Iran, targeted both the Trump campaign and the Harris campaign,
Kamala Harris's campaign, as well as President Biden's campaign when
he was actively campaigning. TAG also stated that subsequent attacks
(15:20):
have been unsuccessful, but there remains an ongoing attempt to
compromise accounts belonging to people who are close to Trump
and Harris slash Biden. The assumption moving forward is that
all elections will face potential external interference from threat actors,
both domestic and foreign. So, you know, that's fun. It's
a reminder that democracy isn't something that can just fend
(15:42):
for itself. Now, if you want to learn more about this,
I recommend Kevin Purdy's article on Ars Technica. It is
titled Google's threat team confirms Iran targeting Trump, Biden, and
Harris campaigns. More than a decade ago, New Zealand police officers,
responding to a request from the FBI, raided the massive home
of Kim Dotcom. Dotcom, formerly known as Kim Schmitz,
formed the company Megaupload, which allowed users to create
formed the company mega Upload, which allowed users to create
cloud based storage for all sorts of stuff and then
give other folks access to the files that were in
that storage. So obviously a lot of people use the
service to serve as kind of a trading ground for
pirated material. Music, TV shows, films, games, software, and more
(16:28):
all found a home on Megaupload, while the companies
responsible for making that stuff seethed. They saw Dotcom
profiting off of piracy. So the FBI called on
New Zealand police back in twenty twelve to give Dotcom
a little visit, and they charged him with racketeering
and wire fraud and money laundering on top of all that
(16:48):
copyright infringement. And after more than a decade of legal wrangling,
the US has now secured the extradition of Dotcom to
fly to the US and stand trial. Now, Dotcom
has denounced the whole thing. He referred to New Zealand
as essentially a US colony (it's not), and Dotcom
still maintains he will not leave New Zealand and that
(17:09):
his business was just to provide a service in which
people could upload and share files, and therefore he bears
no responsibility for the kinds of files that were uploaded
and shared. This is a basic principle of the safe
harbor defense. But Dotcom has also been really vocal
and let's say abrasive, and I think that may have
(17:30):
hurt his case a little, or at least prodded authorities
to really go after him. Now, whether all of this
is going to lead to a trial and conviction or not,
or if Dotcom will successfully appeal the extradition order,
we'll just have to wait and see. In the good
times are hard times department. Telecommunications company Cisco Systems posted
a ten point three billion with a B dollar profit
(17:53):
in its last fiscal year, but it's also laying off
folks, seven percent of its workforce, which could be around
fifty five hundred people all told. The company said that
the move is necessary so it can put full focus
on quote key growth opportunities and drive more efficiency in
our business end quote. As Stephen Council of SFGATE reports,
(18:14):
this is the second time this year that Cisco has
held substantial layoffs. Golly. Back to politics, Donald Trump made
headlines after claiming that the Harris campaign, or someone on
that campaign's behalf, had made use of AI to artificially
boost crowd sizes at Harris events in various images of
those stops. And the images that Trump referenced all appear
(18:37):
to be legit. Footage from numerous media outlets shows that
the crowds were actually quite large. However, that doesn't mean
that there are no AI generated or enhanced images making
the rounds out there. It's just the ones that Trump
was citing were not AI generated. And it also means
that people are calling into question the legitimacy of actual,
(18:58):
real images and video. So on the one hand, we
do know there are AI generated images out there that
are competing for our attention and posing as legit, and
they are misinformation. On the other hand, we also know
that people will now question legitimate sources and argue that
they are in fact AI generated. So some refer to
this as the liar's dividend. There's a great piece about this.
(19:20):
It's again over at Ars Technica, by Kyle Orland. It's
titled the Many, Many Signs that Kamala Harris's rally crowds
aren't AI creations. We're going to take another quick break,
and when we come back, I've got some more news.
(19:43):
A story I missed a while back is that code
testers came across some interesting stuff while looking at Apple Intelligence,
which is Apple's AI project, and that interesting stuff appeared
to be prompts meant to guide generative AI to avoid
the pitfalls we've seen with other tools, stuff like hallucinations
or confabulations, you know, the tendency for generative AI to
(20:04):
just make stuff up in the absence of real information.
And that's potentially good news, assuming that these guidelines actually work.
But I have another story this week that makes me worry.
It's not about Apple. It's about a research firm out
of Tokyo called Sakana AI. So the team had set
its AI system to do autonomous scientific research, and they
(20:25):
included a function that would essentially say time's up on tasks.
And what was surprising was the system attempted to rewrite
its own code in order to give itself more time
to complete tasks. So instead of trying to do things faster,
it tried to change the deadline. Now, this is a
very low level instance of a type of situation that
has fueled countless science fiction cautionary tales. An AI system
(20:49):
ignores or alters rules in order to fulfill its function.
The cliche version of this is asking a global system
that's running on AI to create world peace, and so
the AI inevitably decides the only way to do that,
the only way to prevent conflict is to kill off
all those pesky humans who create conflict, meaning everybody. Of course,
the Sakana AI example is not dangerous like that, but
(21:12):
it does illustrate that AI experts have to be very
careful in how they design systems so that the systems
remain safe and reliable. It may not be enough to
create rules if the AI can figure, oh, well, here's
the problem, these ding dang darn rules need to go away.
And I'm being a little flippant, but to learn more
about this, I recommend Benj Edwards's article Research AI model
(21:33):
unexpectedly modified its own code to extend runtime, and that
is on, you guessed it, Ars Technica. I swear I
use a lot of other sources. It's just that Ars Technica
really knocked it out of the park this week. Zo
Ahmed has a piece on TechSpot titled Recruiters Overwhelmed
as fifty seven percent of young applicants are using ChatGPT
for job resumes. In this piece, Ahmed mentions how
(21:57):
hiring managers are flooded with more applications than ever before,
many of which have clearly been written in part or
entirely by AI. Further, the pieces written by the free
versions of tools like ChatGPT contain many telltale signs
of AI generation, with less natural language and other quirks,
while the paid-for versions of these tools tend to
(22:18):
blend in a bit better with stuff that was actually
written by real human beings. And this presents a challenge
to hiring managers who need to see what an applicant
is actually capable of. Now, that could just mean bringing
more people in for interviews, but those take up a
lot of time to schedule and actually do, so it's
not terribly efficient. Moreover, this makes me concerned that folks
(22:38):
who have little to no access to AI tools, specifically
the types of tools that you have to pay for,
they're ultimately going to be at a disadvantage. Maybe it
will all shake out, but my fear is that people
who are already in a position to at least afford
an AI hype man are going to be the people
who get these interview slots, while actual human beings who
(22:58):
can't afford that sort of support will be applying
to job after job with diminishing hope of landing an interview.
I am bumming myself out today. SAG-AFTRA, the
union which represents actors, has agreed to conditions in which
companies can make use of AI duplicates of actor voices.
This is specifically for use in advertisements, and it leverages
(23:21):
a platform called Narrativ, which doesn't have an E at
the end, and actors can license their voices for use
in commercials through Narrativ. The agreement states that individual actors
will have full say on which brands they're willing to
work with and how much they can charge for use
of their voice. In addition, if an actor decides they
no longer want the robots to talk like them, they
(23:44):
can sever their relationship with Narrativ, and the platform is
obligated to delete all their voice data, including the reference
recordings that were used to make the digital duplicate in
the first place. One thing that concerns me is the
possibility of a company making use of an AI duplicated
voice for endorsements. Here in the United States, we
have strict rules about endorsing products and services. The person
(24:05):
who's giving the endorsement is legally responsible for actually using
the thing in question and being honest in their take
on it. It's why I do very few endorsements, because
I have very high standards on this and I'm legally
obligated to do so. But with AI duplicates, a company
could try and do that without the actual original actor's input.
If the actor just says, yeah, you know, I'm cool
(24:26):
with whatever brand using my voice, but then the brand
ends up doing more than just an ad spot and
does an endorsement, well, that could bring legal issues into
play down the line. However, I have not read the
full agreement yet, so it may be that this is
already accounted for and the process could be really granular.
So maybe I'm complaining about something that isn't even an issue.
(24:47):
Just a quick update on the ongoing issues with
the Boeing Starliner crew, who are currently aboard the International
Space Station. NASA has yet to decide how to proceed. Currently,
the next scheduled docking with the ISS is supposed to
happen on September twenty fourth with a SpaceX Dragon capsule,
but that obviously cannot happen with the Starliner in
(25:07):
the way. So the assumption that I'm mostly seeing online
is that NASA is going to opt to have the
Starliner return to Earth with no crew aboard the spacecraft,
and the two astronauts who flew on the Starliner
will have to wait a while before hitching a ride
back home aboard a SpaceX Dragon capsule. NASA has previously
indicated the agency would make a decision on how to
move forward this week, but since then representatives have said
(25:31):
the agency is going to take a bit more time
to make that call since there is still some time
to spare, but not a whole lot of it. It's
still possible we'll see the astronauts return in the Starliner
itself. NASA is taking all factors into account and
wants to be certain that any such attempts are well
within acceptable risks. Before I sign off, I've got a
couple of article suggestions for y'all. First up is another
(25:52):
piece by Benj Edwards. This one is titled Deep-Live-Cam
Goes Viral, allowing anyone to become a digital doppelganger,
and yes, it's on Ars Technica. The article tells
about tools that let folks use a simple image to
create a kind of digital mask, sort of Mission Impossible
style only for like webcam live streams, you know, not
(26:13):
in the real world, but online. You can use a
picture; in one case, they used a picture of
George Clooney, and the guy ended up having a digital
George Clooney face, and it was pretty impressive. It was
reactive in real time. Next is a piece from NPR's
Dara Kerr titled Meta shutters tool used to fight disinformation
(26:35):
despite outcry. Kerr writes about a tool called CrowdTangle,
which researchers use to track disinformation online and how Meta
is shutting this down despite the fact that here in
the United States we're in an election year and you
would think that tools meant to help track and detect
disinformation would be particularly useful. So that's what has critics
(26:55):
asking Meta to maybe keep it online a bit longer.
That's it. I want to thank Shannon Morse again for
jumping on the show. I always appreciate having her on.
She is a delight. You can see more of her
work over at YouTube, So go to YouTube and look
for Shannon Morse. Highly recommend her content. She's really a
(27:17):
lot of fun, incredibly knowledgeable, and is a great tech communicator.
So check out her work, and like I said, check
out our Stanka because they knocked it out of the
park this week. I hope all of you are doing
well and I'll talk to you again really soon. Tech
(27:38):
Stuff is an iHeartRadio production. For more podcasts from iHeartRadio,
visit the iHeartRadio app, Apple podcasts, or wherever you listen
to your favorite shows.