Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to tech Stuff, a production from iHeartRadio. Hey there
and welcome to tech Stuff. I'm your host, Jonathan Strickland.
I'm an executive producer with iHeartRadio. And how the tech
are you? It's time for the tech news for Tuesday,
March twenty first, twenty twenty three. First up, Google's bug
(00:27):
hunting team found eighteen zero day vulnerabilities in Samsung hardware,
specifically the Exynos chipsets. That's E X Y
N O S. So zero day vulnerabilities, as the name suggests,
are vulnerabilities that are in various technologies from day one,
(00:50):
that have been there since the technology came out, and
potentially are actively being exploited by hackers. And the Exynos
chipsets are ARM based system on a chip designs,
and they serve as the brains of stuff like certain
(01:11):
smartphones and wearables and even some car systems. The team
has been uncovering these vulnerabilities since late twenty twenty two,
so it's not like these are brand new discoveries in
all cases, but it has been ongoing and at least
four of them are considered severe vulnerabilities, potentially allowing an
(01:31):
attacker to target any affected device as long as that
attacker knows the phone number associated with the Exynos modem
in that device. So if someone happened to own a
smartphone with one of these chips as its processor, an
attacker could potentially perform a remote code execution or RCE
(01:53):
on that phone just by knowing that phone number. So
you wouldn't have to even be tricked into doing anything, right.
A lot of vulnerabilities we hear about involve someone taking
an action that they shouldn't have. They get a prompt,
they respond to it, and that in turn creates the
opportunity for an attacker to compromise a device. In this case,
(02:15):
it literally just means that the attacker has to have
the phone number of the target device. This is a
massive security flaw obviously, because then the attacker could force
the victim's phone to run all sorts of different types
of code. They could have the phone execute code that
turns it into an espionage device similar to the Pegasus
(02:39):
stuff we talked about with iPhones a couple of years ago,
or maybe they could lock the phone in a sort
of ransomware attack. Now, about the other fourteen flaws, there
were eighteen total, and four of them were the extreme ones. The
other fourteen are also bad news, but they don't pose
the same critical risk as the four that I'm alluding to. Now,
(03:01):
patches are going out in security updates, but patches are
things that different companies roll out at different scales and
at different timetables, so not all of the affected devices
are necessarily able to be patched right now. That's a
(03:23):
real serious issue. Now, companies like Google have
rolled out patches and security updates already, so as long
as you're up to date on your security patches, you
have presumably blocked off this potential attack vector. But the
proposed workaround for those who are still waiting to get
a patch is pretty extreme. It essentially involves shutting down
(03:46):
both the Wi-Fi calling and Voice over LTE
functions of your phone. So essentially it means removing the
phone from your phone. And I get it, a lot
of y'all never use a phone as a phone anyway,
right? Like, a voice call is probably the last thing
you would use your phone for. But it does strike
me as particularly weird to remove the phone part from
(04:09):
a phone. Anyway, some of the devices affected include the
Pixel six and Pixel seven from Google, a whole bunch
of different Samsung Mobile devices are affected, which isn't a surprise.
They include the S twenty two, the M twelve, the
M thirteen, the M thirty three, and a whole bunch
of devices that are in the A series of Samsung
(04:30):
mobile devices, not all A series, but a lot of them.
Plus any vehicles that have an Exynos Auto T5123
chipset or any wearables that have
the Exynos W920 chipset are affected by
this vulnerability, which is a big old yikes. By the way,
this is a good reminder to activate real security updates
(04:52):
when you get them, because security updates can address critical
vulnerabilities like this one and prevent you from becoming
a victim in the future. The only thing is that
you should verify that a security update you receive is
real first. So like, do just the bare minimum of research to
make sure that yes, the relevant company really has issued
(05:14):
a security update. So if you are using say a
Google Pixel device, just make sure that Google did in
fact issue a security update. Just do a quick search
to make sure that is a thing that has happened,
and then yeah, install that as soon as you can
because it can potentially cut off attack vectors. Sometimes these
(05:36):
updates can also affect other things on a device, which
is a pain in the butt, but it's still better
than having your phone compromised. So you know, just a
word out there. Google has recently banned a China based
app from the Google Play Store. It is not TikTok,
(05:56):
though we will talk about the besieged video based social
network a little later in this episode. No, it's
Pinduoduo. Now. Essentially, this app matches consumers with merchants
for various products. It started out as an agricultural thing
and went on to stuff like groceries, and now it's
pretty much everything. So why is Google removing the app? Well,
(06:20):
according to Google, the most recent version of the app
contains malware of some sort, so Google suspended the app
out of a concern for security. Reuters reached out to
Pinduoduo and received a response that claims that
Google has alerted the company that the app has been suspended,
but that Google did not give any details as to
why the app was suspended. Anyway, this isn't the first
(06:43):
time that Pinduoduo has been in legal hot
water here in the United States. One of the major
issues that is often connected to it involves the counterfeiting
of various goods. So essentially, the accusation is that Pinduoduo
tends to match folks up with merchants who sell
knockoff versions of designer or luxury goods. That is a
(07:06):
big part of China's business as well, is just creating
counterfeit goods that mimic high end designer goods. It's a
big problem in the designer goods industry. I can't pretend
like I fully understand it because I'm basic. I don't
do like the whole designer goods thing because I have
(07:27):
no class. Reuters also reports that Google reps have claimed
that it was just an accident that the company deleted certain
communication records, records that could have been used as evidence against it in
an upcoming trial in which the US Justice Department is
bringing antitrust charges against Google. So essentially,
the Department of Justice is saying, hey, there's all these
(07:50):
records that we potentially should have had access to, and
you're deleting them. It's essentially like you're shredding documents because
you anticipate a raid. And Google is saying, no, we
didn't mean to delete anything. We were actually taking reasonable
steps to preserve everything, but some stuff slipped through the cracks.
Now, at the heart of the lawsuit are claims from most
(08:12):
of the states here in the US that Google has
been using anti competitive strategies to prevent any other company
from getting a foothold in the search business. The
DOJ suspects that Google has denied access to or
deleted documents that would lend credence to their case against
the company, and Google says, ah, that's ridiculous. We've already
(08:33):
shared millions of documents with you. Now. I don't have
eyes on this one. I have no idea if Google's
claims are with or without merit. I don't know if Google,
you know, intentionally deleted files that would have been damaging
to the company's case, or if this truly was just
(08:53):
something that happened without any malice involved. I will say
it's not a good look. I mean, anytime documents go missing,
that isn't exactly a ringing endorsement of your claims of
being innocent. But yeah, I don't know one way or
the other. TechCrunch has a piece titled Facebook political
(09:14):
micro targeting at center of GDPR complaints in Germany. So
GDPR, if you are not in the know, is a
set of regulations centered around protecting EU citizens'
personal information. And if you're a digital company and
you want to do business in the EU, to have any
sort of business practices within the European Union, then
(09:38):
complying with GDPR is a really big deal. In fact,
there are entire consulting companies out there designed to help other
companies make sure that their business practices are in alignment
with GDPR, because if you don't comply with it, then
your company can get hit with massive fines or even
(09:58):
be denied access to doing business within the European Union,
which would be a huge deal. Right. So in this case,
what has happened is that Facebook has used microtargeting to
direct political advertisements to German citizens without first getting their
express consent to make such use of their personal data.
(10:21):
And the EU assigns different levels of importance to different
kinds of personal information, so things like your political leanings,
that's way way way up there. That's highly protected information.
So there is a high bar to clear if you
are going to make use of that personal information in
any way. And according to the complaints, Meta slash Facebook
(10:44):
totally failed to do that. Like, they should have gotten
the express consent of each and every person whose information
they used to sell microtargeted ads,
and they didn't do that. And arguably just as bad,
if not worse, is that every single political party in
(11:04):
Germany took advantage of Meta's disregard for the GDPR regulations.
So whether it was a far right political party, far left,
and everything in between, all of them used microtargeting to
deliver advertising messages, including messages that contradicted each other from
(11:25):
the same party, just based on microtargeting of individuals. So,
for example, one party might have sent out a
message to someone who is identified as being very pro
climate change regulation, saying, we're the party that is
pursuing climate protection over everything else, so believe in us.
(11:50):
But that same party might have identified someone else as
being more in tune with personal freedom than regulation,
with a message saying, we believe in the climate just like you do,
but we also believe you should be able to travel
wherever you want, whenever you want, so we're about protecting
the climate without infringing on personal liberty. These two messages
(12:13):
cannot coexist from the same party, right, you cannot stay
true to both simultaneously. If you're telling one person we're
going to protect the climate at all costs, and you're
telling someone else we'll protect the climate, but we're not
gonna you know, we're not going to cramp your style.
Those are two messages that cannot be true at the
same time. And the point is that if the same
(12:36):
party is delivering these two contradictory messages, they are lying
to someone, at least one person
but possibly everybody. And ultimately it undermines the democratic process
because you cannot trust the messages coming out of a
political party. If those messages are contradictory of one another,
(12:58):
it destroys democracy itself. Fun times. Anyway, an advocacy group,
noyb, has really cranked up the pressure here, and
I would argue for a very good reason. That is,
noyb is putting pressure on EU officials to enforce
the rules under GDPR. Like, they've created these rules, and
(13:18):
noyb is saying clearly these rules are not being followed,
so it's your job to get in here and to
actually enforce them and to get things like explicit consent
before any of this kind of data can be used
for microtargeted advertising, as well as, you know, putting some
pressure on the political party system within Germany to obey
(13:40):
the laws that these political parties are actually, you know,
supposed to uphold in the first place. So yeah, pretty
big exposé when you look at it in those terms,
and I'm very curious where it goes from here. Okay,
we've got some more news to cover. Before we get
to that, let's take a quick break. Russia is gearing
(14:12):
up for its own presidential election that will happen in
twenty twenty four, and now election officials in Russia have
a new directive, ditch the iPhone. So Reuters has reported
that the Kremlin has ordered all officials connected with the
twenty twenty four Russian presidential election, which I am sure
will be absolutely straightforward and fair, to stop
(14:36):
using Apple iPhones, reportedly due to concern that Western intelligence
agencies could be working with Apple, or maybe even without
Apple's knowledge and participation, to leverage iPhones in order to
gather intelligence. I mean, this could sound familiar because we
certainly have worried about Russia doing the exact same sort
(14:57):
of thing, not through the iPhone, but through various means
to inject misinformation and to affect elections across the United States,
to undermine the democratic process. So Russian officials are kind
of worried that the same thing's going to happen in Russia. Now,
I don't have a lot of faith in the fairness
of the Russian election system, if I'm being totally honest,
(15:20):
But I also think that perhaps the officials have a point here.
I don't think that Apple would work so closely
with US authorities on this. I don't think Apple would
comply and allow their phones to be turned into surveillance devices.
Apple has a history of resisting US government efforts to
(15:42):
leverage Apple's abilities to do this kind of thing. Apple
famously resisted the FBI's efforts to create a backdoor entry
point for Apple's systems, saying that no, that compromises user
security and privacy, and it compromises our reputation with our customers.
So I don't think Apple would necessarily be on board
(16:04):
for any official attempt to surveil a foreign election process. However,
we have seen companies, surveillance companies creating apps that exploit
vulnerabilities in iOS devices in the past. You know, we
saw that with an Israeli company creating the Pegasus app
(16:24):
that would turn an iPhone into a spy device just
by sending a message through iMessage. Apple has since
patched out that vulnerability, But the point is we have
seen that in the past, and really anytime you're talking
about relying upon a device that's going to send data
out of the country you're in to another country that
(16:47):
might be, if not outright hostile, at least wary of
the one you live in, there are concerns, right because
if you don't have full control of the information chain,
then there are worries about how that could affect national security.
So while I don't think everything's on the up and
up in Russia necessarily, I do also understand the concern
(17:07):
about officials relying upon technology that is created in a
country that at least is wary of Russia, if not
outright hostile at times. So yeah, I get the security concern.
It's not really a surprise. I don't even think it's
really an indictment against Apple, because again, Apple does not
(17:28):
have a reputation of working alongside US government to that degree.
But allegedly all election officials will need to be free
of iPhones by April first. Stanford has introduced Alpaca AI,
which according to researchers, performs on a level similar to
GPT three point five, that's the previous generation of
(18:13):
OpenAI's language model. But the surprising thing is that these
researchers spent about six hundred dollars developing their Alpaca AI.
You know, OpenAI has had billions of dollars of
investments poured into it, and apparently these Stanford researchers were
able to achieve a similar outcome to the previous generation
(18:13):
of GPT for just six hundred bucks. So they started
with Meta's open source LLaMA 7B language model. So
this is Meta's own large language model. It is not
the same one that OpenAI uses, and Meta allows
academics to access these models for a price. The
(18:35):
7B model is the cheapest and smallest of the large
language models, and that's the one the researchers went with,
so they saved some money by going with that. Then
they used OpenAI's GPT to generate sample conversations, around
fifty two thousand of them, to post train
(18:57):
their own AI model. So they made use of
OpenAI's official GPT research tools to do that. That cost
a few hundred bucks. Then they made some tweaks and
some adjustments, and they ended up with an AI language
model that can perform really well against GPT itself, like
(19:17):
almost at the same level. I think in something like
almost one hundred and eighty tests, ninety showed their AI
model outperforming GPT, and in eighty-nine cases GPT outperformed
the AI model, so they're essentially neck and neck.
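To make that pipeline a little more concrete, here is a minimal sketch of the kind of supervised fine tuning step being described, written in Python against the Hugging Face transformers and datasets libraries. To be clear, the checkpoint name, the data file, the prompt format, and the hyperparameters below are all illustrative assumptions on my part, not the Stanford team's actual recipe.

```python
# A minimal sketch of instruction fine tuning a small open base model on
# GPT-generated conversations. Everything named here (checkpoint, file,
# prompt format, hyperparameters) is an illustrative assumption.
import json

from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments)

BASE_MODEL = "huggyllama/llama-7b"  # hypothetical stand-in for LLaMA 7B weights

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA ships without a pad token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Each record pairs a GPT-generated instruction with a GPT-generated response,
# standing in for the roughly fifty-two thousand sample conversations.
with open("sample_conversations.json") as f:  # hypothetical file
    records = json.load(f)

def to_features(record):
    # Fold instruction and response into one sequence; the causal LM
    # objective then trains the model to reproduce the response.
    # (For simplicity, loss here is also computed over padding tokens.)
    text = (f"### Instruction:\n{record['instruction']}\n"
            f"### Response:\n{record['response']}")
    tokens = tokenizer(text, truncation=True, padding="max_length",
                       max_length=512)
    tokens["labels"] = tokens["input_ids"].copy()  # predict the next token
    return tokens

dataset = Dataset.from_list(records).map(
    to_features, remove_columns=["instruction", "response"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="alpaca-sketch", num_train_epochs=3,
                           per_device_train_batch_size=4, learning_rate=2e-5),
    train_dataset=dataset,
)
trainer.train()  # the post-training step, boiled down to one call
```

The point is the shape of the recipe, a small open base model plus a big pile of machine generated instruction and response pairs, which is why the whole thing can come in at a few hundred dollars of compute and API calls.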
As the website New Atlas points out, the research
(19:39):
shows that if you have the right tools at your disposal,
you can potentially create your own large language model for
AI and then build stuff on top of that, like
chatbots and whatnot. And while Meta limits access to LLaMA
to academics and such, the problem is someone leaked the
(20:00):
code online. So while you officially have to be an
academic to get access to LLaMA, if you're the unethical type,
you can just grab the code online. It is available
out there, and you can start to develop your own
language model and your own work based on top of that.
This means we could be at the beginning of an
(20:21):
era of exploding chatbots and AI applications, all powered
by similar but independent language models. It's entirely possible that
some or even many of these will lack the restrictions
that we see OpenAI putting on top of its
(20:41):
own products like ChatGPT. You know, OpenAI works
hard to try and limit things like you know, racist
or misogynistic or transphobic language from being generated through its tools,
and you know, obviously we've also seen issues with the
(21:03):
reliability of information generated through tools like ChatGPT. So
the fear is that these new tools, which could be
pretty easy to make yourself based upon this research project,
won't have those safeguards in place, and that we could
see a lot more misinformation or abuse of such systems,
(21:25):
maybe using systems to try and trick people with phishing attacks,
all sorts of different tactics that could be harmful could
be coming down the road. Because as we've already seen, the
barrier to entry is not that high, and if you're
able to figure out a way to exploit that and
make a whole lot of money, maybe you'd be willing
(21:46):
to take that unethical route and create a whole new
generation of victims out there. Oh brave new world to
have such chatbots in it. And on the topic
of ChatGPT, recently the service started showing users the title
and short description of ChatGPT sessions that were not
(22:06):
their own. You would look at your history and start
seeing stuff that you definitely did not do. So users
were having the titles and short descriptions of their chat
sessions shared with other users without their consent, and this
was not by design. This is a good time to
remind everyone that if you are using ChatGPT, you
should not share sensitive information in ChatGPT conversations because
(22:31):
you never know when those conversations could become public. So,
if you were using ChatGPT to, I don't know,
maybe draft a resignation letter, but maybe you aren't seriously
considering leaving your job, you were just kind of toying
with the idea, and you have ChatGPT draft a
resignation letter on your behalf. Well, that could end up
(22:51):
being awfully awkward. If your chat session title and description
then gets shared with somebody else but without your permission,
and now people have the impression that you are leaving
your job, that could create all sorts of complications in
your life that you probably don't want to have to
deal with. That's just one example. What if you were drafting,
(23:13):
I don't know, a love letter to your crush. I
mean like, there's all sorts of things that you could
have been using chat GPT to do, even just for fun,
that you probably don't want people to see. So in
my case, if people were to see my history with
ChatGPT, they would probably see how I was trying
to get ChatGPT to compose a Shakespearean sonnet about AI.
(23:36):
Or there was one time when I asked it to
create an episode of tech stuff. I was just curious
what would happen, and I told it to create an
episode of tech stuff about ChatGPT, though I had
already done an episode about ChatGPT. So it's not
like I was trying to get out of doing my work, y'all.
That's not what this was about. I was just curious
what would happen, Like would it actually sound like one
(23:58):
of my episodes? The answer is no, it did not
sound anything like one of my episodes. It created an
interview with a fake chat GPT representative. So if I
had actually recorded the episode, I would have had to
invent a person who supposedly worked at open AI who
would be talking about ChatGPT in an authoritative way,
(24:19):
which seems unethical, to say the least. But anyway,
OpenAI shut down ChatGPT for a short while yesterday
evening and they took the chat history offline for a while.
They are reportedly starting to roll that back into service,
but it might be a while before all users are
(24:40):
actually able to access their chat histories, so it may
be a while before you're able to look back on,
you know, the stuff you have been talking about with
ChatGPT in the past. The Verge's Tom Warren has
a great article titled Microsoft's new Copilot will change Office
documents Forever. This kind of again falls in line with
(25:02):
AI and chatbots. So Tom Warren was in a Microsoft
Teams meeting with Microsoft reps to talk about Copilot. That's
Microsoft's AI feature that's going to be incorporated into all
office products moving forward. During the meeting, the reps turned
Copilot on to do stuff like take notes about the
(25:25):
Microsoft Teams meeting. At the conclusion of the meeting, Warren
actually received these notes, which included a recap of all
the questions he had asked during the interview, and it
sounds like Copilot can help users take advantage of office
products as if the user were an elite user.
Maybe you are an elite user in one or
two office products, or maybe all of them, or maybe
(25:47):
you know someone who is, like someone who just knows
all the different tricks to create the most snazzy and
effective presentation, or they're able to create really helpful
graphs and charts out of Excel sheets, like there's an
art to it. You know, you can learn the basic
steps and do a fine job, but to do something
(26:10):
really effective requires a little more work. And you know,
it sounds to me like Copilot can actually help you
achieve those things and not only help you do them,
but also teach you how to do them yourself. So
it's not just like a magic wand that makes things better,
but a tutorial tool that can help you learn how
(26:31):
to take advantage of features that maybe you never even
knew existed. It's kind of like the old Microsoft Clippy,
but on steroids. Warren also mentions that the reps from
Microsoft acknowledge that the tool is good, but it's not perfect,
that it can make mistakes, it can generate stuff that
isn't appropriate, and that human review and interaction is still
(26:54):
very much a needed part of the process. That you
can't just step back and let office do all the
work for you. This is a collaborative approach to work,
not one where AI takes over for you. I recommend
you pop on over to the Verge and read the
piece to learn more. I do think the messaging from
Microsoft is pretty good here, that these tools are meant
(27:15):
to augment, but not replace, your own skills. That is
the approach to AI that I find the most intriguing
and certainly far less concerning than the doomsday scenarios we
often hear about how AI is going to take
all of our jobs. The Verge also has a piece
titled text to Video AI inches closer as startup Runway
(27:35):
announces new model, and yeah, the article is pretty much
what it sounds like. Just as DALL-E and Midjourney
can generate images based on text inputs, Runway can generate
short snippets of video, and soon Runway Gen two will
release and show the progress of this technology. It is
not a high fidelity or photo realistic experience. So far,
(28:00):
the video I've seen has been in a fairly low
resolution and has elements in it that don't look real,
so you can tell it's like computer generated. But the
demonstrations are pretty fascinating. You just type in a simple
description of what you want to see, and you know,
how descriptive you are, both about the scene you
want to see and the kind of camera work you
(28:21):
want, will affect the clip that Runway generates. Runway also
offers a suite of AI-powered video editing tools that can
do stuff like help remove a background from a video
or stabilize a camera and all that kind of stuff.
But yeah, with Gen two, the big thing is that
it's able to just create entire video clips. Obviously, AI
(28:44):
generated video opens up new opportunities for abuse, So it's
not all wine and roses here, like, there are some real,
potentially harmful uses of this technology. But the actual videos
so far, again, they're not photorealistic, and they
sometimes include stuff that, you know, just gives it away that it's
a computer generated clip. So it's not quite to the
(29:05):
point where we can never believe anything on video ever again,
but we're getting there. Okay, I've got a few more
stories to cover. Before I get to that, let's take
another quick break. Last week, I mentioned that after
(29:27):
Baidu demonstrated its own Ernie chatbot, investors were initially disappointed
in the company and we saw stocks fall. They recovered
a bit afterward, as Baidu opened up Ernie to a
small group of testers who demonstrated their own interactions with
the bot, and so investors started to come around a
(29:48):
little bit on that and Reuters reports that the chatbot
has some pretty big blind spots that are not going
to surprise you in the least. Namely, if you asked
the chatbot questions about, say, Xi Jinping, the president of China,
then Ernie Bot will say it hasn't learned how to
answer those kinds of questions. Similarly, Reuters found that if
(30:09):
they asked about, you know, sensitive or taboo subjects, such
as the events at Tiananmen Square in nineteen eighty nine,
Ernie would suggest changing the subject. This is pretty much
in line with how China limits and censors Internet content
on politically sensitive subjects, and in the interest of fairness,
I should point out that OpenAI's version of
ChatGPT has its own limitations. When that model was first
(30:34):
put out, it would not answer questions about developing news
or breaking news or anything, because it would only have
access to information from twenty twenty and earlier, as I recall,
so there were limitations on what it could talk about,
so it wasn't necessarily censorship so much as it was
open AI trying to avoid issues where their tool could
(30:57):
be used to frame news in a particular perspective or bias,
so it's not unheard of here either. I should say
also that Microsoft's incorporation of ChatGPT into Bing lifted those limitations.
You could ask about developing news on that platform. While
(31:18):
TikTok continues to argue for its own existence in the US,
the company has introduced new rules about AI deep fake
videos. A lot of thematic similarities in our stories today.
The new rules say that synthetic and manipulated media must
be clearly disclosed as such. So, in other words, if
a video uses deep fake technology, it needs to make
(31:39):
that clear in the video itself, Otherwise it might be
seen as attempting to pass itself off as the legit article,
and thus will be subject to moderation like being deleted
or whatever. Further, deep fakes of private individuals are strictly
off the table. You can't make a deep fake of
some private citizen. That is against the rules. The only
(32:01):
deep fakes that are allowed are of public figures, and
again they have to be disclosed as being deep fakes.
And in deep fake videos, you cannot make the public
figure do stuff like endorse something, because that is a
violation of the rules. It is creating a perceived relationship
where none actually exists. You also can't have a deep
fake version of a celebrity doing stuff like calling for
(32:24):
violence or spreading hate speech. These are against the terms
and conditions of TikTok, so those would also be in
violation of the rules. I think this would be an
important step even if TikTok were not facing political pressure
from all corners at the moment, but certainly as there
is unprecedented scrutiny directed at TikTok right now, these new
(32:46):
rules I think are a necessity, especially in the wake
of descriptions of stuff like the Runway Gen two tool
that's coming out. Deep fakes are already a big problem.
The ability for AI to replicate voices is a big problem.
It's just going to get worse, so creating rules about
these things is really really important. Italy's AGCM, which is
(33:10):
a consumer watchdog group, has launched an investigation into TikTok,
alleging the platform does not do enough to stop videos
that promote harmful content, ranging from videos that promote eating
disorders to encouraging suicide. So these videos violate the terms
and conditions of TikTok. It is against the rules. But
(33:30):
what the watchdog is saying is that TikTok isn't doing
enough to enforce those rules. The AGCM is focusing on
TikTok Technology Limited. That's a branch of TikTok that's based
in Ireland. It's responsible for consumer relations in Europe. And
the event that apparently prompted this investigation was a string
of viral videos about a quote unquote French scar challenge.
(33:54):
The fact that I had to look up what the
heck that was is really a bummer. And if you're curious,
essentially it involves pinching the skin along a line to
create the illusion that you have a faded but permanent scar,
typically on your face. But this can actually cause permanent
damage itself. If you pinch hard enough, you can actually
create a permanent line on your face, and that fake
(34:16):
scar essentially becomes a real scar. That's not super cool.
The AGCM expressed concern not just about this particular trend, but
the overall problem of such content going viral without moderators
stepping in to enforce the rules. Amazon is laying off
another nine thousand employees. This is after a previous round
of layoffs affected eighteen thousand staff, which means this year
(34:38):
Amazon will have laid off about nine percent of its
global corporate workforce, a lot of people, you know, twenty
seven thousand people. The upcoming layoffs are expected to impact
divisions like Twitch, Amazon Web Services, or AWS, which previously
was one of the most profitable divisions within Amazon, and
(34:58):
advertising departments, among others. CEO Andy Jassy cited an uncertain
economy as the reason the company is holding these layoffs.
It's a popular phrase to yell right now. Apparently around
four hundred, actually more than four hundred of those cuts
will affect Twitch, which recently saw CEO Emmett Shear resign
his position. It's hard for me to feel bad about
his position. It's hard for me to feel bad about
a company as huge as Amazon, but I definitely do
feel empathy for all the people affected by these layoffs,
both the ones directly affected and their coworkers who then
have to figure out how to proceed without the input
of the people who were just laid off. And I
definitely hope all those who are directly affected by these
(35:41):
layoffs land on their feet in this increasingly uncertain economic climate.
That is a very rough position to be in, and
I wish you all the best. Amazon itself is facing
several government investigations, and Politico reports that some of these
could be leading to lawsuits brought against Amazon by the
(36:02):
federal government of the United States. So Amazon's already involved
in state level lawsuits regarding things like consumer laws and
accusations of anti competitive practices such as promoting Amazon owned
brands over competitors when you go into search results, but
these new lawsuits would be federal level charges, not state level,
(36:23):
so it would significantly turn up the heat on Amazon.
Some of the potential moves could force Amazon to divest
itself of companies that it has already acquired. One of
them could potentially push Amazon to divest itself of
iRobot, that's the company that makes the Roomba vacuum cleaners. Amazon
may also face charges alleging that Amazon products like the
(36:45):
Ring security doorbell system potentially violate the Children's Online Privacy
Protection Act, or COPPA, and there are other investigations that
could conceivably lead to different types of lawsuits. It
sounds like a multi-pronged attack and a sign
that the US government is taking a much harder regulatory
stance when it comes to big companies in general, but
(37:08):
specifically big tech companies. Okay, that's it. That's the news
for Tuesday, March twenty first, twenty twenty three. Hope you are
all well. If you would like to reach out to me,
do so on Twitter. The handle for the show is
tech Stuff HSW or you can download the iHeartRadio app.
It's free to download, free to use. Type tech Stuff
(37:29):
in the little search field, and you'll see the tech stuff
podcast page pop up. Click into that, and you'll see a
little microphone icon, and if you use that, you can
leave a voice message up to thirty seconds in length
to let me know what you would like to hear
in the future, and I'll talk to you again really soon.
(37:50):
Tech Stuff is an iHeartRadio production. For more podcasts from iHeartRadio,
visit the iHeartRadio app, Apple Podcasts, or wherever you listen
to your favorite shows.