May 25, 2023 20 mins

Twitter hit some technical snags yesterday while Presidential hopeful Ron DeSantis announced his 2024 campaign. OpenAI's CEO issues a bit of a warning to the EU regarding AI regulation. And Sony has a new gaming peripheral on the horizon.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production from iHeartRadio. Hey there,
and welcome to TechStuff. I'm your host, Jonathan Strickland.
I'm an executive producer with iHeartRadio, and how the tech
are you? It's time for the tech news for Thursday,
May twenty fifth, twenty twenty three. So, first thing, I

(00:28):
want to address an update to a story I covered
on Tuesday, which was about an incident in Cardiff in
Wales where a pair of teenage boys died in a
traffic accident, and word had gotten around that they were
being pursued by police and that that

(00:49):
contributed to or even caused the accident, and there
were riots that followed. Now, initially reports were that the
police weren't involved at all, that this was misinformation
that spread rapidly, and that it ended up creating the
situation that ultimately escalated into a riot. Since then,

(01:09):
it has gotten out that CCTV footage has shown that
there was a police van that was following the boys.
The police in Cardiff have said that there still wasn't
a pursuit, but the footage shows that there was a
police van following behind the boys. The police have given
a timeline that suggests that the van actually turned off

(01:31):
from following them before the crash happened. I don't know
what the truth is at this point, but it is
important to follow up on it because obviously the initial
statement was about this being misinformation that then spread rapidly
throughout the community, and it may turn out that it's
not misinformation at all. So we'll have to continue to

(01:52):
watch the story and see what happens. Obviously, if it
turns out that the police were misrepresenting what was going on,
it is going to make a situation where police relationships
with the community are already on shaky ground much worse.
So we will keep an eye out to see how
this story continues. Yesterday, Twitter attempted to serve as the

(02:17):
platform for GOP presidential hopeful Ron DeSantis as he made
his announcement that he was officially launching a presidential campaign.
This had been long anticipated; yesterday was just the formal announcement. Now,
I say Twitter attempted to serve as a platform because
the Twitter space that Elon Musk created specifically for this

(02:40):
event became a cacophony when audio issues made it impossible
for anyone to say anything without it being a massive,
incomprehensible mess. Musk later said the problem was that Twitter's
servers were overloaded. Several tech news outlets have pointed out
that this event was well attended virtually; it wasn't a

(03:01):
small event. It still didn't come close to other
large online-only events that had, you know, several times
more people in attendance but fewer technical glitches. Whatever
the cause of the glitches, the launch did not go smoothly.
Now I'm not going to comment on the political side
of this, and you're welcome. Instead, I just want to

(03:24):
say the problems, the technical problems, really didn't come as
much of a surprise to at least the grouchy people
like me out in the tech space, because Musk effectively
gutted Twitter over the course of his ownership of the platform,
and since then, essentially a skeleton crew has had to

(03:45):
scramble to meet whatever sometimes apparently arbitrary goals Musk comes
up with, or at least that's how it looks on
the outside. And I fully admit I am looking at
this from the outside. I could be one hundred percent
off base with these observations and assumptions. So I don't
want to suggest that my view is the sum

(04:08):
total of the truth. Just from the outside, it looks
like Musk keeps wanting Twitter to tackle these huge projects,
but with such a reduced workforce that it's kind
of setting the platform up to fail, which is unfortunate
because I'm sure the people who are actually working on it,
you know, they don't want things to fail, and they're
working hard, but they're doing so with a lack of

(04:33):
assets and resources. All right, now it's time to talk
about AI again, and to talk about Sam Altman, CEO
of OpenAI, again. Now you might remember that very
recently Altman appeared before the US Congress and said that AI
is a field where regulation is needed. And at the time,

(04:57):
I was kind of hopeful that this meant Altman was
really sincere in that belief and that he was going
to take an active role to really draft useful regulations.
But then you could also say, well, yeah, but Sam
Bankman-Fried said very similar things about cryptocurrency and regulations,
and then look where he's at right now, so that

(05:18):
maybe you shouldn't take these statements at face value.
So over in the EU, Altman's tune is slightly different
than it was in the US. So I would say
that the EU has people who are far more skeptical
and concerned about AI than what you typically see here

(05:40):
in the United States. Not to say that people in
the US are totally cool with AI and they have
no worries. But in the EU, I think it's
more prominent. And the EU has been drafting
a law called the AI Act, which categorizes artificial intelligence
into three different buckets according to perceived risks. So at

(06:01):
the very very top of this are unacceptable risks. So
these would be AI applications that would potentially violate citizens' rights.
So these would be applications that the EU would just
outlaw period like these, you cannot use AI to do
these sort of things, So this would be something like
China's social scoring system for example. That's where each person

(06:24):
would receive a score which really relates to how useful
and loyal they were according to the state. That would
be right out as a use of AI. Under this
is a category called high-risk AI systems. These
could potentially be useful, so they could have a social

(06:46):
benefit to them, but they're also potentially harmful. So because
of that anything that fell into this category would need
to follow a strict set of rules and regulations in
order to be legal in the EU. So, in other words,
a high risk system would be allowable under EU law,
provided that the companies that were making and using those

(07:09):
systems abided by the rules and remained transparent and such.
Altman says that ChatGPT and the GPT large language
model would fall into this high-risk category as the
EU has defined it, and he objects to that. He
says that shouldn't be the case. And he also seems

(07:30):
to think that the rules and regulations are too restrictive
and that they are going to harm small companies that
wish to integrate AI. Now keep in mind those small
companies integrating AI. I think Altman's looking at those small
companies as customers, right? These are companies that would be
essentially licensing OpenAI's platforms for work. So that's, you know,

(07:55):
he has a vested interest in this. So I guess
what Altman appears to be saying, and this is my
interpretation, is that he's all for regulations if he has a
heavy hand in making them so that, you know, they
don't actually impede his business. But if a country or
the European Union creates rules independently, he's ready to take

(08:16):
his toys and go home. Or really, to quote him,
he said, quote, if we can comply, we will, and
if we can't, we'll cease operating. We will try, but
there are technical limits to what's possible. End quote. Meanwhile,
Brad Smith, Microsoft's president, called on US lawmakers to form

(08:39):
rules that would limit or prevent integrating AI into critical
systems like say the US power grid or various water infrastructure,
that kind of thing. He also called for AI companies
to be held accountable if and when their tools cause problems.
And considering how Microsoft has really invested heavily in OpenAI
and integrated GPT into its Bing product, this

(09:04):
is a pretty interesting take. But Smith said, quote this
is the fundamental need to ensure that machines remain subject
to effective oversight by people, and the people who design
and operate machines remain accountable to everyone else. End quote.
So here's the thing: I agree with that. I think

(09:25):
that is a reasonable thing to call for. I think
the world in general needs to come up with rules
for the design and integration of AI and limitations to
that right, like where AI should and shouldn't be used,
and how it should and shouldn't be used. And I
think those rules need to require companies to be as
transparent as possible, which just gets more and more complicated as

(09:49):
these AI models get more and more complex, and also
the rules need to make sense, they need to be effective.
They need to prevent companies from just having the protection
of rules being in existence as they continue to develop AI.
So by that I mean this, Okay, So if there's
no rule, like there's an absence of rules, companies can

(10:11):
actually be a little nervous as they operate. Right, you
have no rules, you have no oversight, But that means
that if you do something really bad, there's going to
be a big ruckus and perhaps an overreaction to the ruckus,
which means that you end up harming yourself more than
you would have if you just made a set of
rules and abided by them. Now, when you make rules, well,

(10:35):
that means that you do have these rules, these restrictions,
but usually not everything's covered, right. There are often gaps
or loopholes. So if you do something unanticipated, something
that isn't covered by the rules, your defense is, well,
there's nothing in the rules that says, you know, I
can't make AI that automatically denies credit to people who

(10:56):
come from such and such a place, because historically we
know all those folks default on loans or whatever. Rules
that are intended to protect the public sometimes have an
odd way of protecting the perpetrators of bad deeds. That's
what I'm saying. Like, if there were no rules, then
you might have a much larger reaction. If there are

(11:18):
rules and whatever you did wasn't covered by them, then
you can say, hey, I was following the rules. Yeah,
this bad thing happened, but I wasn't breaking the law.
So creating rules does need to be done, but it
needs to be done with care and critical thinking. And
it also has to be an ongoing process. It's not
something you do once and then you walk away. According

(11:38):
to a research firm called Watchful Technologies, TikTok has been
testing an AI chatbot on Apple mobile devices in the
TikTok app. The chatbot is called Taco, and I don't
know if it only works on Tuesdays. Oh, hang on,
it's actually spelled Tako. The chatbot is meant to help
with discovery, so users apparently activate this chatbot and converse

(12:02):
with it to help find stuff they like or to
answer questions they have. They might have a question of
what does it mean if my toilet won't flush or whatever,
and then the AI agent will find videos that somehow
relate to that kind of thing. Now, according to the researchers,
Tako's purpose seems mostly just to keep people on TikTok

(12:23):
longer and keep them watching videos. So it's not like
Tako's posing as a best friend or something like that,
but rather augmenting the recommendation algorithm to find stuff that's
going to maximize users' time on the app. Okay, we're
going to take a quick break. When we come back,
we've got some more tech news to cover. We're back,

(12:51):
So yesterday, on Wednesday, Meta held another round of layoffs,
hitting somewhere in the neighborhood of six thousand people this time.
The jobs affected were mostly on the business side of
Meta's operations, as opposed to, you know, the tech side. Reportedly,
morale is in pretty bad shape at Meta. There were

(13:14):
a couple of articles I saw this morning that were
saying things like Meta employees are trying to avoid being
included in a future round of layoffs by essentially creating
work, like they're manufacturing work for themselves to do. It's
kind of like the "boss is coming, look busy" kind of mentality.
Others at Meta appear to have no motivation to work

(13:36):
at all, because you know, when you don't know whether
or not you're going to have a job the next day,
it can really do a number on you. I have
been there, and it is tough. Now, according to TechCrunch,
this most recent round of layoffs should theoretically be the
last major layoffs for the foreseeable future. Meta has indicated

(13:58):
that it was aiming to eliminate ten thousand jobs this
spring in total across layoffs, and this one was the second
round of layoffs, so they have definitely hit that ten
thousand mark. And since late last year, Meta has handed
around twenty-one thousand staff their walking papers and has
also put a hiring freeze on thousands of open positions.

(14:21):
Mark Zuckerberg, Meta's CEO, has called twenty twenty three the
Year of Efficiency, indicating that Meta had a bloated workforce
that wasn't really representative of the actual amount of work
that needed to be done. In other words, we've got
more people than we have work for them to do. Also,
the company continues to face some pretty hefty costs which

(14:41):
might be motivating some of these cutbacks. That includes a
more than one billion dollar fine that came down from
the EU earlier this week. To learn about that, just
listen to Tuesday's episode of Tech Stuff. Apple has announced
a truly ginormous deal with the company Broadcom that will

(15:02):
see these two companies making 5G components in the
United States. So, according to an article in Quartz, Apple
will invest somewhere in the neighborhood of four hundred thirty
billion dollars to boost US manufacturing, largely in the 5G
realm, but in the connectivity space in general. So over
the last couple of years, Apple has been looking for

(15:24):
ways to decrease its reliance on Chinese manufacturing for various components.
There are a lot of different reasons to pull out
of China, ranging from optics because it doesn't always look
good to be doing business with a country that has
a pretty awful human rights record all the way to
practical things like supply chain issues if you want to

(15:45):
be super cynical. But it's not really easy to just
extract from China, largely because companies like Apple depend upon
the much, much lower costs of labor in
China to keep production costs down. Recently, a manufacturing facility
in India actually announced it would no longer manufacture Apple

(16:06):
components because Apple's demands regarding costs of production meant that
this Indian company's profit margins were nearly nonexistent.
I mean, potentially the company would end up losing money
to make stuff for Apple because Apple was saying, We're
not going to pay you more than X amount, and
meanwhile it costs Y amount to make the stuff. So

(16:32):
there are companies in other places where Apple has previously
tried to move to avoid working in China that have
already kind of balked because of this issue. You know,
the fact that unless Apple is willing to pay the
amount that is acceptable within that country for labor, then
it's just not going to get done. So a move

(16:53):
out of China is likely going to mean increased prices
on items in the long run. Global economics are super complicated.
The attorneys general for several states here in the United
States have banded together to level a massive lawsuit against
a telecommunications company called Avid Telecom, and, according to the lawsuit,
this one telecommunications company facilitated billions of robocalls to people

(17:18):
who had previously signed up on the National Do Not
Call Registry. So that registry is supposed to protect the
people who sign up to it from receiving nuisance and
unwanted calls, primarily telemarketing calls, but also things like scams
and stuff. Citizens can actually designate the types of calls
that they are willing to receive. So if you're
in the US, you

(17:39):
can go to the Do Not Call Registry, sign up
for free, and even indicate which calls you don't mind getting. Anyway,
the problem is that, at least according to this lawsuit,
Avid Telecom allowed telemarketing calls and scams and such to
go through when they absolutely should not have been able to.
The lawyer for Avid Telecom denies the charges, saying the

(18:00):
company acted in accordance with the law and expressed disappointment
that all these attorneys general didn't just sit down for
a civilized discussion before bringing a nasty lawsuit. We'll have
to see where this goes from here. Sony held a
PlayStation event this week, a PlayStation showcase event, and unveiled
a new handheld gaming device that will be capable of

(18:22):
playing any non VR game streamed from a nearby PlayStation five.
So essentially, you're running the game on your PlayStation five,
you're just streaming it to this handheld device that's within
a certain range of that PS five. It doesn't even
have an official name yet, but the internal name for
the handheld is Project Q. So Project Q will be

(18:44):
dependent upon a console. It will not be able to
play games natively. You can't just take it on the
go like you would with a Nintendo Switch. It kind
of looks like someone took a modern PlayStation controller, cut
it in half, and then shoved an eight inch screen
in between the two halves. I'm not crazy about this design,
but I'm also in the minority of folks who don't

(19:06):
like PlayStation controllers in general. I know I'm bonkers. Anyway.
That's about all we know so far about Project Q.
Sony didn't have information on how much folks should expect
this to cost or when it will come out. If
it's on the more expensive side, that's kind of a
deal breaker in my book. But then I also don't
own a PS five yet. I might actually change that
next month. Because I'm thinking about buying myself one as

(19:28):
a birthday present, but I'm pretty sure I'll skip Project Q. Finally,
South Korea successfully launched a rocket carrying eight satellites earlier today.
The rocket, called Nuri, launched in the afternoon in South Korea,
and according to South Korea's Ministry of Science, it achieved orbit.
The office also reported that the primary satellite on board

(19:49):
the NEXTSat-2, has already established communications with Korea's
station in Antarctica. And as for the other satellites, at
the time of this recording, there were actually questions about whether
one of the microsatellites failed to deploy properly from the rocket.
I don't have more information on that just yet, but
South Korea is now the seventh country to launch a
rocket carrying more than a ton of payload into orbit.

(20:11):
And that's it for the tech news. I hope you
are all well, and I'll talk to you again really soon.
TechStuff is an iHeartRadio production. For more podcasts from iHeartRadio,
visit the iHeartRadio app, Apple Podcasts, or wherever you listen

(20:32):
to your favorite shows.
