
May 31, 2024 22 mins

Reuters reports that TikTok is developing a US-only version of its recommendation algorithm, but the company disputes the report's accuracy. Plus, could AI make your next favorite TV show?

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeart Podcasts, and how the tech are you? It's time for the tech news for the week ending on Friday, May thirty first, twenty twenty four, and let's

(00:25):
start off with some news about TikTok. The Independent reports that part of TikTok's Project Texas plan (Project Texas was the company's attempt to reassure American politicians that TikTok isn't a data funnel that leads directly to China) was to give the US government a remarkable amount of control and oversight. So,

(00:47):
for one thing, it would have let federal officials elect
some of the board members for TikTok. The government would
be given access to TikTok's source code to look for
evidence of backdoor access and that sort of thing. And
apparently there even would have been a kill switch feature
built into TikTok should someone determine that it was serving

(01:09):
as some sort of insidious tool belonging to a foreign adversary.
But the White House rejected this plan and said it
would not be sufficient to address national security concerns. Now,
I have only read the reporting around this plan. I
have not actually read the full details of the plan itself,
so I don't really have any more insight into this.

(01:30):
But we all know what actually happened instead of it, right? Congress passed a law that, if it holds up to TikTok's legal challenges, will ultimately force TikTok to either separate entirely from its Chinese parent company, ByteDance, or face a nationwide ban in the United States. This brings me to the next TikTok story. Reuters reports that TikTok has

(01:54):
secretly been working on creating a recommendation algorithm that would be completely independent from the one that ByteDance uses for the sister app Douyin, which is the, you know, the Chinese variant of TikTok, or you could argue TikTok's the American variant of Douyin. The implication is that this

(02:15):
is a potential preparation in the event that TikTok is forced to separate from the mothership. Reuters cites unnamed sources who say the project is massive and could take a year or so to complete. These sources claim that TikTok executives have talked about the project in all hands meetings and such, but TikTok representatives have disputed Reuters' report and

(02:39):
said it was, quote, misleading and factually inaccurate, end quote, and that the divestiture from ByteDance is, quote, simply not possible, end quote. Where the truth lies, I don't know. I have no doubt that divesting TikTok would be really challenging. I do not think it would be impossible. It might

(03:00):
be very difficult, and it might also mean that the
effort to do so would cost so much in resources
that economically it's not viable. So maybe, you know, from an economic standpoint, you could say, yeah, it's not possible, but I don't think that it's technologically impossible. And it
is also within the realm of possibility that TikTok representatives

(03:21):
are just saying this, because if they were to admit otherwise, that the company is working on this independent algorithm,
that could potentially give the US government a bit more
leverage to say, well, you're already making preparations, so there's
no problem here. But as I mentioned earlier, TikTok is
already suing to challenge this law, and this matter is

(03:42):
far from settled. So my guess is the US recommendation algorithm, if in fact that really does exist and Reuters' sources are truthful, I think that that's the contingency plan. I think that's TikTok having a worst case scenario for the company, the worst case being that it actually is forced to divest itself, or for ByteDance rather to divest itself

(04:06):
of TikTok. A security firm called Black Lotus Labs has a report that might explain a massive technological failing that happened last year. So back in October of twenty twenty three, more than half a million customers of the internet service provider Windstream lost service. So what was the problem? Well, the customers found that their routers had become bricked, though

(04:29):
they didn't actually necessarily know that's what had happened. Some
of them did, but most people were probably just thinking,
my Internet don't work no more. But it meant that
like six hundred thousand customers or so were without Internet
service and that's not good. And eventually Windstream would send
out replacement routers once it finally kind of got to

(04:49):
the conclusion that, yeah, it's the routers that failed, and it was nothing that was the fault of the customers. As far as anyone can tell, these failures happened over the course of three days. So what can make so many routers fail in such a short time? Well, according to Black Lotus Labs, it was malware. Now I should add that, as Dan Goodin of Ars Technica reports, Black Lotus Labs

(05:13):
did not specifically name Windstream in their report. Instead, the firm spelled out the parameters of this malware attack and
the effects it had, and Goodin makes the case that
it's a pretty darn good match for what seems to
have happened over at Windstream, and I would agree with that. Now,
I would argue that the most concerning elements of this

(05:34):
report are the things we still do not know. We
do not know who carried out the attack. We do
not know why they did this attack. We also don't
know how the attacker was able to get initial access
to these routers in the first place. Like they used
a specific kind of malware in order to overwrite the
firmware on the router. We know that, like we know

(05:55):
what kind of malware they used, but how they got
that entry point into the routers in the first place.
That is still an unknown variable. It could be that
there's a vulnerability that's in these routers that the cybersecurity
community doesn't know about yet but the attacker does, or
it could be something else. We also don't know if

(06:15):
the attack was, you know, backed by a nation state. Was this a state-backed hacking attack? No clue. And
because we know so little about the actual attack, like
who carried it out and how it was done, we
don't have a lot of good advice on how to

(06:35):
avoid this kind of thing in the future and how
to protect ourselves against future attacks, apart from just, you know, the general words of wisdom, like, you know, when you get a router or a modem or whatever, change the default password, change it to something that only you know and that's a strong password, or, you know, reboot your router on occasion in order to try and protect yourself

(06:58):
against attacks, that kind of thing, which I mean, those
are good rules to follow, but it's not very reassuring,
like it's not specific to this particular case. Blake Montgomery reports in The Guardian that US authorities shut down a botnet, and not just any botnet, but the world's largest

(07:18):
botnet ever. So a botnet, for those of you unfamiliar with the term, is a network of compromised computers. I mean,
the name kind of gives it away, a network of bots.
So typically a hacker uses malware or phishing attacks in
order to establish some kind of backdoor access to a

(07:40):
network of computers. So these are computers belonging to people
and companies and such and organizations that the hacker then
is able to at least take some partial control over.
And then typically the hacker puts these computers to work
to do something. This can include anything from using this
network of zombie computers. Zombie computers, that's another kind of

(08:04):
term for a botnet, a zombie army. You might use
those to blast some web server with Internet traffic in
an attempt to overwhelm it. Now that's a distributed denial
of service attack. Or you might put this network of
computers to work in the cryptocurrency mines. But in this

(08:25):
particular case, this compromised network was used to do several
different things, but the big one was an alleged COVID
insurance fraud scam that amounted to around six billion dollars
in fraud. The takedown operation, which was code named Endgame
because I guess cybersecurity folks like to feel cool and

(08:46):
presumably really enjoy the Avengers movies, relied upon joint cooperation of authorities in the United States, the United Kingdom, Ukraine, the Netherlands, Denmark, Germany, and France. The US Department of Justice arrested YunHe Wang, who is a Chinese national, and accused Wang of essentially spearheading the botnet operations. Wang did

(09:10):
not do it on his own, but was allegedly a large part of this, and if Wang is in fact found guilty, he could face up to sixty five years in prison. He's thirty five years old now, so that's a big old oof. The United States National Security Agency,
or NSA, says it's a good idea for smartphone owners

(09:32):
to completely power down their devices at least once a week. The agency says that doing this can help mitigate issues like spear phishing, but it's not a guarantee that
you'll be free and clear of all risks. It just helps.
So essentially, just turning your device off and on again on a regular basis should be considered a best practice,

(09:54):
and the NSA should know, because they are experts at spying on people. If you'd like more information on that, look up stories about PRISM or MAINWAY, that kind of thing. But to be less cheeky, I agree that
regularly doing a full power down and then power up
of your device is probably a good idea for lots
of different reasons, not least of which is that it

(10:14):
could provide an extra bit of security against threats. All right,
we've got a lot more tech news stories to cover,
but before we get to that, let's take a quick
break to thank our sponsors. We're back and now for

(10:37):
a couple of stories about using technology to spread propaganda
and misinformation. First up, Meta says that it identified and subsequently removed six influence campaigns. This is coming from an article I saw on The Verge by Nick Barclay. Meta
says that a couple of these campaigns were using AI
in an attempt to push certain political viewpoints and to

(10:57):
make it seem as if that particular point of view had
a larger amount of support than it really did. Meta
disclosed that the campaigns originated out of places such as Croatia, China, Bangladesh,
Israel and Iran. Apparently, the Israeli campaign made use of
AI to create comments to try and boost engagement and
the spread of messaging, and the Chinese campaign allegedly used

(11:21):
AI to generate images as part of that campaign. Now,
according to Meta, these attempts weren't particularly sophisticated or hard
to identify. But obviously folks expect that AI will get better at creating this kind of stuff that people will not be as readily able to detect. It'll be easier for the stuff to kind of pass a casual glance, and

(11:45):
considering the discourse on some social platforms, I expect it is
not going to take a whole lot of work to
craft something that fits right in, because goodness knows, I've
seen some garbage on social networks. On a similar note,
the New York Times reports that OpenAI has said
it identified five online campaigns that were making use of
AI to boost messaging. These campaigns originated in places like Russia, China, Iran,

(12:10):
and Israel. There's no word on whether or not any
of these are the same ones that Meta mentioned. This week,
The Register had a rather snarky article about this that
talks about how these campaigns were relatively low stakes because
they hadn't seen much penetration. They were largely, you know, unseen by actual human people. Instead, they mostly consisted

(12:30):
of bots posting stuff that other bots had created, or
maybe the same stuff that those same bots had created.
At any rate, it sounds like the actual impact of
these campaigns was minimal. And again some of that has
to do with the fact that the efforts of using
AI are not terribly sophisticated yet. But I do stress
the word yet, because there's every reason to expect these

(12:52):
attacks will get more sophisticated over time, and the real concern is whether OpenAI will be as effective at detecting and disrupting such campaigns when they inevitably surface. And now in the AI is Coming for Creatives category, I submit for your approval a story written by Winston Cho for The Hollywood Reporter about a company called Fable Studio.

(13:12):
This company is launching an AI powered platform called Showrunner,
which the studio claims will be able to create AI
generated television series. So it sounds to me like the
idea is you give AI some guidelines on what you want, and then the AI creates an animated and voice acted episode that consists of scenes that are based off

(13:35):
your prompts. So imagine that you're sitting there and you're thinking, man,
I really wish they hadn't canceled Firefly. And then imagine
you're thinking, hey, wait a minute, I can create new episodes of Firefly using this tool and Wash lives in my version. Spoiler alert if you haven't seen Serenity. So
the actors might not look quite right, because again, the

(13:55):
tool can only make animated characters at the moment, it
can't do video AI generation. They might not sound right.
And sure, it probably won't come across like a real
Firefly episode and sound like something that Joss Whedon wrote,
but you could technically do it. If you're also thinking, hey,
this kind of sounds like the sort of stuff that

(14:17):
the Writers Guild of America and the Screen Actors Guild
were really worried about, you would be right on the money.
Fable Studio is launching a closed beta test of the
platform in the near future that will likely last the
rest of this year before it is able to launch the service for realsies. I will not be joining the waitlist for this test. I have serious ethical objections to

(14:38):
AI generated entertainment, and they are far too numerous to get into here. Now, I say that, but our next story actually goes into one of the big reasons why. So Jesus Diaz of Fast Company reports that Instagram is training AI models using user data on the platform, and worse, most users have no way to opt out of it.

(15:02):
So if you're an artist of any type and you
use Instagram to showcase your work, whether that's dance or
visual arts or photography, whatever it might be, your work
is being used to train up Meta's generative AI models.
The only people who even have the option to opt
out of this are citizens of the European Union, where

(15:22):
the rules of the General Data Protection Regulation, or GDPR, provide some protection. But as Diaz reports, Meta has taken some rather extraordinary steps to obfuscate the option to opt out. First up is the initial message alerting users in the EU to the practice in the first place. There's this big old blue Close button, and if you hit Close,

(15:43):
essentially that serves as an "I'm cool with this," you know, it's essentially sending the opt-in message, so your opt-out option is gone. Within the message itself is a phrase that says, quote, this means you have the right to object to how your information is used for these purposes, end quote, and the right to object phrase within

(16:06):
that is a link to the actual opt out feature. Now,
as you might imagine, this is much smaller than the
blue Close button. Now, if you did click the right
to object phrase, it takes you to a rather intimidating
looking form that I would argue appears to be designed
to discourage users from taking the time to opt out.

(16:28):
That is my opinion. I am just saying. My opinion
is this was a calculated move to discourage people from
opting out. And what's more, Diaz rightly points out that
GDPR makes it illegal for Meta to deny anyone their
request to opt out of data capture and usage practices.
You don't have to give a reason, you don't have

(16:49):
to justify it. You just have to say I opt
out and that's it. So Meta has made this more
complicated than it needs to be. However, the form makes
it seem like you have to make a case to
opt out and then Meta has the right to deny
your request. They do not have that right. So if
you do live in the EU and you want to
opt out of this and you see that message pop up,

(17:11):
I suggest that when Meta asks you to explain why
you want to opt out, you write in something like
GDPR says I don't have to give you a reason, you jerk face, or something to that effect. I feel
like Meta is really playing it fast and loose in
the EU with this approach, as I believe certain regulators,
and I'm thinking specifically of ones who happen to live
in Ireland, might argue that the UX design that Meta

(17:33):
has employed is purposefully attempting to trick users into opting
in without necessarily wanting to. And if I had to
lay money on it, I would say that they're going
to face some lawsuits about this in the future. The
US Federal Aviation Administration, or FAA, has given Amazon the
clearance to operate delivery drones outside of the direct view

(17:55):
of a ground spotter. So previously, the FAA required Amazon
to employ ground spotters to make sure that drones weren't
putting people and property at risk while zooming around delivering, you know, socks and Taylor Swift albums and that kind
of thing. Without that requirement, Amazon will now have the
chance to expand operations beyond a few test markets and
potentially make it a viable means of delivering packages to

(18:19):
more customers. Now, this doesn't necessarily mean the air is
going to be buzzing with drones in the near future,
because the company has made some staffing cuts to the
Prime Air division in the recent past, and Amazon announced
just a few weeks ago it would be ending drone
operations in California entirely. So it might be a while
before you start seeing these suckers dropping off impulse purchases

(18:40):
in your neck of the woods, but a major regulatory hurdle is now out of the way. Today marks the last day of employment for Twitch's current Safety Advisory Council. This group of nine folks, which included industry experts and streamers, was responsible for advising Twitch on how to improve safety measures on the platform and to build trust among the

(19:01):
community of creators and users alike. They were alerted at
the beginning of this month that their services would no
longer be required at the end of May. Instead, Twitch
plans to create a new group consisting solely of Twitch ambassadors.
As Hayden Field of CNBC puts it, the language around
this decision is aligned with the general corporate speak that

(19:22):
typically boils down to we're cutting costs and safety is
a real hassle. That's me paraphrasing, by the way. Field is far more professional and responsible than I am. Field
also points out that in twenty twenty three, Twitch sacked
around fifty folks in their trust and safety team, so
this move seems to be in alignment with that one
from last year. Considering the numerous stories that have come

(19:44):
out around how important security is and how risks and
threats are growing each year, partly due to the use
of AI, this to me seems like a short term
decision that could potentially have disastrous long term consequences. But
then Twitch has also made several policy changes over the
last couple of years that have really blown up in
the company's proverbial face, so maybe this is just the

(20:05):
platform saying the council hasn't been a good fit. In Engadget, Will Shanklin reports that Spotify, after some resistance, has agreed to issue refunds to folks who purchased a Car Thing. That's the actual name for the product, the Car Thing.
Spotify launched this a couple of years ago. It's a
device that attaches to your car's entertainment system, and it
provides streaming media from Spotify to your vehicle. But the

(20:28):
company announced last week that it was going to end
support for the devices on December ninth of this year,
at which point all those Car Things will become useless
things because they'll be bricked. Those puppies cost ninety bucks
a pop, and since the service is only a couple
of years old, that cheesed a lot of people off. Reportedly,
Spotify wasn't going to offer refunds at first, but the

(20:50):
company subsequently didn't about face, and did so just before
a class action lawsuit rolled in against them that was
accusing them of unfair business practices. So whether the change
of heart was in anticipation of that lawsuit or Spotify
just arrived at the conclusion that maybe it was a
bad idea to ignore customer complaints independently, I don't know.

(21:11):
But if you bought one, you can reach out to customer service for a refund. You do have to provide proof of purchase, however. And finally, it's the end of an era. ICQ, the venerable instant messenger service, will shuffle off this mortal coil on my birthday, which doesn't mean anything to you, but on June twenty sixth, the Russian

(21:31):
company VK, which is where ICQ ultimately landed, is going
to shut down the service, so anyone still using ICQ
will have to shift to some other instant messenger client. Now,
if you've never used the service, it was a bit peculiar.
You didn't get to choose a handle or use your
name or anything like that. Instead, the service would assign
you a number, you know, nice and personal like, and

(21:54):
it worked kind of like a phone number, and you
could initiate chat sessions with other users. And I used
it a lot back in my younger days, though honestly,
I can't remember the last time I popped on. It's
likely been at least two decades or more at this point. Honestly,
if you had told me that ICQ would outlive AOL Instant Messenger, which shut down at the end of twenty seventeen,

(22:15):
I would have thought you were bonkers. But that's how it turned out. Well, you had a good run, ICQ. I'll give you one final "uh oh." That's the sound that would play when you got a message on ICQ. That's it for this episode, and the news for the week ending on May thirty first, twenty twenty four. I hope that you are all well, and I'll talk to you again really soon. TechStuff is an iHeartRadio production.

(22:44):
For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts,
or wherever you listen to your favorite shows.
