
February 28, 2023 37 mins

We've got a ton of stories relating to AI to talk about today. Plus, VW's Car-Net service refuses to help detectives track down a stolen car (with a toddler inside it) unless they first pay the $150 reactivation fee. Ford proposes a future where cars repossess themselves. And everyone is banning TikTok.



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production from iHeartRadio. Hey there,
and welcome to TechStuff. I'm your host, Jonathan Strickland.
I'm an executive producer with iHeartRadio. And how the tech
are you. It's time for the tech news for February
twenty eighth, twenty twenty three. For such a beastly month

(00:26):
as February, twenty-eight days as a rule are plenty.
Shout out if you recognize that reference. It is another
AI dominated news day, though I promise there are a
few stories I'll be covering that do not have AI
as the central topic. But starting off, Elon Musk reportedly
wants to found a new AI lab to compete against

(00:49):
OpenAI. Now, for those of y'all who remember my
episode about OpenAI, maybe this comes as a surprise
because Musk was actually one of the original co-founders
of OpenAI in the first place. But back then,
OpenAI was a not-for-profit organization, and it
had the goal of using an open source approach to

(01:09):
developing an evolving artificial intelligence in a way that ideally
would be universally beneficial. You know, none of this AI
that benefits one group at the expense of everyone else
kind of nonsense. But then Musk stepped down from the
board of OpenAI, ostensibly because Tesla was also

(01:29):
developing AI of its own and there was a potential
conflict of interest, though Musk later said that he
was critical of OpenAI's direction. And since that day,
Musk has gone on to criticize OpenAI, particularly once
the organization founded a for-profit arm ostensibly to help
fund the nonprofit part, and Musk has also criticized ChatGPT,

(01:53):
saying OpenAI is quote training AI to be woke
end quote. Yeah, Musk, I get it. You're a billionaire
white guy. Why not punch down, because there's nowhere else
better to punch, right? What a jerk. I'm, of course
being a little facetious. My opinion of Musk is obviously
pretty low, but that's beside the point. I know no

(02:15):
one really wants to hear that, so I'll drop it.
Musk wants to create this AI lab and pursue AI
chatbots that are unfettered by the chains of wokeness.
Considering how Musk has shown that his free speech absolutist
stance isn't actually in alignment with his behavior (you can
also see how Twitter had banned mention of competing services

(02:37):
like Mastodon, Instagram, and others on its platform to see
evidence of this), I suspect all of this is going
to come back to haunt him should he actually achieve
this goal. Meanwhile, Tesla investors are likely further aggravated to
see that the company's CEO continues to direct his attention
to yet another endeavor rather than address problems with Tesla.

(03:00):
And then over at OpenAI, the company is introducing
a platform for developers that will give them access to
OpenAI's tools, namely the company's machine learning models. So
these are the very powerful AI compute systems that, to
build out yourself, would require millions and millions of dollars.

(03:21):
OpenAI is calling this offering Foundry, and it means
that people who have an idea for apps or services
that would feature AI in some way could have access
to compute assets without having to build them all themselves.
That is an enticing offer for developers who might otherwise
have a great idea but they lack the funds to

(03:44):
be able to execute upon it. Details are somewhat scarce.
OpenAI has not announced when we might expect Foundry
to launch, but we do know that it's going to
set developers back a pretty penny, actually a whole bunch
of pretty pennies to access these services. According to TechCrunch,
three months of access to the lightweight version of GPT

(04:04):
three point five would set you back seventy-eight grand.
That's seventy-eight thousand dollars for three months of access. Wowsers.
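
For a sense of what developers would actually be paying for, here is a minimal sketch of renting hosted model access through OpenAI's public Python library as it worked in early 2023, rather than building your own machine learning infrastructure. Foundry's own interface has not been published, so the model name, parameters, and prompt below are illustrative assumptions, not details of the Foundry offering.

```python
# A rough sketch of renting model access instead of building your own
# machine learning infrastructure. The model name, parameters, and prompt
# are illustrative assumptions; Foundry's actual interface isn't public.
import os

import openai  # the pre-1.0 "openai" Python package available in early 2023

openai.api_key = os.environ["OPENAI_API_KEY"]  # keep the key out of source code


def ask_model(prompt: str) -> str:
    """Send a prompt to a hosted GPT-3.5-era model and return its reply."""
    response = openai.Completion.create(
        model="text-davinci-003",  # a GPT-3.5-family model of that era
        prompt=prompt,
        max_tokens=150,
        temperature=0.7,
    )
    return response.choices[0].text.strip()


if __name__ == "__main__":
    print(ask_model("Suggest three names for a budget travel planning app."))
```

At seventy-eight thousand dollars for three months, the quoted Foundry tier works out to roughly twenty-six thousand dollars a month.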
So this is well beyond the reach of your average
home developer. We're really talking more about startups and companies
that have a real shot at seeing a return on investment,
but they lack the infrastructure or money to build out

(04:25):
their own machine learning systems. Over at Meta, Mark Zuckerberg
announced that the company is pursuing its own AI strategy.
I'm just gonna read out his Facebook post because
it tells you everything you need to know. Quote. We're
creating a new top level product group at Meta focused
on generative AI to turbocharge our work in this area.

(04:48):
We're starting by pulling together a lot of teams working
on generative AI across the company into one group focused
on building delightful experiences around this technology into all of
our different products. In the short term, we'll focus on
building creative and expressive tools. Over the longer term, we'll
focus on developing AI personas that can help people in

(05:12):
a variety of ways. We're exploring experiences with text like
chat in WhatsApp and Messenger, with images like creative Instagram
filters and ad formats, and with video and multimodal experiences.
We have a lot of foundational work to do before
getting to the really futuristic experiences, but I'm excited about

(05:32):
all the new things we'll build along the way, end quote.
So it sounds like Meta, like Musk, is on the
path to create its own AI approach, or perhaps Meta
will turn to OpenAI to tap into the power
of ChatGPT. It's still early days and we're not
done with AI yet. Snapchat is also jumping into the

(05:53):
AI game with a product it calls My AI. Only
subscribers to Snapchat Plus will have access to this. According
to Snapchat, the AI will do stuff like if you
ask it, it will give you recommendations for presents that
you could buy for friends and family. I mean, presumably Snapchat
would scan everything you've ever said to these people and

(06:14):
start to pull suggestions out of that, or it might
give suggestions about things you could do with somebody, like hey,
I want to hang out with so and so, what's
a good activity? And it might say, well, they really
like the outdoors, how about you go hiking, that kind
of thing. You can also apparently name this AI. However,
the company also owns up to the fact that ChatGPT,

(06:35):
which is the system powering My AI, isn't always reliable.
We've talked about that a lot over the last couple
of months, or as Snapchat actually put it, quote, as
with all AI-powered chatbots, My AI is prone to
hallucination and can be tricked into saying just about anything.

(06:56):
End quote. Oh. Also, all that communication with AI is
logged for the purposes of review and development, so anything
you do say to My AI is being recorded. So
that means it's best not to confide in My AI
all your secrets, like Grandma's chocolate chip cookie recipe or

(07:16):
where you hid the bodies, because someone somewhere could be
reading over that log. The whole announcement dedicates a surprising
amount of space that warns users that the tool might
not work as intended, and that almost raises the question
about why they're deploying this tool so early. If they're
taking this amount of effort to say, hey, y'all this

(07:40):
thing might go haywire, and they really point out all
the different ways that you can flag stuff so that
people can review it and thus address any issues that
pop up. Like, it's a significant amount of the
announcement that is all about covering their butts, so to speak.
So I suppose one answer as to why they're deploying

(08:03):
it so early is that this turns Snapchat Plus subscribers
into QA testers that they don't have to pay. Right,
these aren't employees. They could turn the community into the
QA team. It's the basic concept behind open beta programs. Right.
You find out by using a wide deployment where the

(08:25):
problems are, and then you fix them before, you know,
you deploy it to an even larger audience in the future.
Jordan Parker Erb wrote a piece for Insider titled I
asked ChatGPT to write messages to my Tinder matches.
A dating coach said they gave off a creepy vibe. Now,
I don't think anyone's really surprised by that. Heck, if

(08:46):
you again turned on Nothing, Forever, the AI-powered endless
Seinfeld episode, you would probably guess this would be the
inevitable outcome, because those episodes can get a little unsettling
as well. And this piece in Insider indicates that
ChatGPT's responses fell into some pretty common traps when
one attempts to navigate the complicated world of modern dating,

(09:10):
namely that ChatGPT wrote responses that are way too long. This,
by the way, tells me that if I were single,
I would probably be single forever at this point, because
come on, y'all, there's no denying. I will use a
thousand words when ten would do just fine. So I
would never do well on these kinds of apps. Also,

(09:31):
ChatGPT leaned heavily on its emoji game, and as
the title of the piece points out, some of the
responses came across as creepy. Also, the coach pointed out
that it's best for folks to just be themselves when
using dating apps because otherwise your prospective date will get
the wrong impression, and that pretty much guarantees things aren't

(09:52):
going to go well. Like if a Tinder match thinks
that the response was really cool, but the really cool
response was written by AI, and then they meet you
and you do not have the same vibe. That's a problem.
I feel like I'm describing almost every romantic comedy that
was written in the eighties and centered on teenagers, except

(10:15):
that instead of it being AI, it's typically you know,
the well meaning popular kids who are attempting to transform
a person so that they become popular. It feels like that,
except I guess I'm describing the next generation of teen
centered comedies. I would not be surprised if we find
a movie like that. Anyway, I highly recommend reading the

(10:37):
actual article. Some of the AI-generated examples that Jordan
shares are absolutely hilarious in that awkward, cringey sitcom
kind of way. Again, it's a piece in Insider, and
it is called I asked ChatGPT to write messages
to my Tinder matches. You could just search for that.
I recommend reading it. It's good for, you know, a laugh.

(10:58):
Of course, AI goes well beyond chatbots and machine learning.
We've talked about other uses of AI and the dangers
that they can present. One example that springs to mind
because it comes up time and again is facial recognition technology.
Even if the application of this technology is benign, there

(11:18):
are frequently problems with the underlying tech. Unintended bias has
been a huge issue with facial recognition technology for years,
ranging from some services being unable to detect a person
of color's face properly to misidentification, which can lead to
traumatic experiences such as being targeted by law enforcement simply

(11:39):
because a computer can't tell the difference between different people.
Frequently, again, people of color, and they are disproportionately targeted
and affected by such technology. Well, last week New Scientist
presented another example, one with truly grim and terrifying implications.
The magazine found a contract between a tech company called

(12:01):
RealNetworks and the United States Air Force. RealNetworks
offers a facial recognition platform that they call Secure Accurate
Facial Recognition, or SAFR, and the implication is that the
Air Force is incorporating this technology into its unmanned aerial
vehicle, or UAV, program, you know, drones. A lack of

(12:22):
information has led to some speculation, some of which I
think is definitely understandable and believable. After all, Special Forces
units have been involved in clandestine operations that are at
least difficult to separate from stuff like assassinations, and sometimes impossible.
Sometimes it's just outright an assassination. So it is not

(12:43):
a stretch to imagine a unit like a Special Forces
unit making use of a drone with this technology in
an effort to identify and acquire targets. But the potential
for misuse of such technology, let alone the chance that
the tech could make a mistake, has led critics to
raise the alarm about this approach, and I think that

(13:05):
is a wise reaction. Even if the tech works perfectly,
you still have to wrestle with the fact that people
can sometimes be the absolute worst. They can abuse technology
for their own purposes, and when it comes to something
as potentially lethal as this, that is a major problem. Okay,

(13:25):
we're going to take a quick break. When we come back,
we'll have a lot more tech news to cover. We're back,
and we're not done with AI yet. I do promise
we have other stories besides AI, but we've got a

(13:45):
couple more to get through, and one of the stories
we have is about how AI is complicated, not just
because of the technology, but because of people and the
way we react to AI and interact with AI. I
think that this is a truly fascinating topic that relates

(14:07):
heavily to both psychology and technology. So I want to
talk about a recent study out of my alma mater,
the University of Georgia. Go Bulldogs. Nicole Davis, who is
a graduate student at UGA, participated in a research project
that I think is both interesting and has some upsetting
but not really surprising conclusions. The project brought together a

(14:31):
bunch of people and then asked them questions about stereotypes
that relate to white, Black, and Asian ethnic groups, and generally,
the responses indicated that people saw Asians as being
the most competent people and Black people the least competent
people for any given task. I guess it's a really

(14:54):
ugly stereotype, but it's also undeniably a pretty common one.
Then the users were given a task and it was
to try and find a way to reduce the expense
of a vacation rental, and they were going to make
use of an AI powered bot, a chatbot. They had
a little avatar representing the AI. So these were cartoonish

(15:15):
avatars and there were some that were white, some that
were Black, and some that were Asian in design, and
the users were later asked to comment on the bot's performance,
specifically how human and warm it was and how competent
it was at helping the user reduce the vacation rental cost.
Davis said, quote when we asked about the bot, we

(15:38):
saw perceptions change. Even if they said yes, I feel
like Black people are less competent, they also said yes,
I feel like the Black AIs were more competent, end quote, Davis said.
This is an example of expectation violation theory, which posits
that if someone enters into a situation and they have

(15:59):
low expectations and then their experience is a positive one,
they walk away feeling that it was an overwhelmingly positive experience,
not just that it was good, but because it exceeded their expectations,
it was even better than that. Davis goes on to
say that more research is needed to find ways in
which bot representation can help to impact consumer perception in

(16:21):
positive ways, like perhaps breaking down barriers they might otherwise
have because of these stereotypes that they maybe unconsciously have
of different people. But this is obviously a complicated and
sensitive challenge. Amazon has been using AI to help monitor
delivery drivers for a while now, but this recently got

(16:42):
more attention when a TikTok user with the handle Amber
Gertz gave an explanation of how the delivery truck's camera
systems monitor driver behavior. She is an Amazon delivery driver.
She created this TikTok that explains the whole thing, and
she says that the system logs violations if a driver

(17:03):
breaks protocol in any way. This can include stuff like
failing to come to a complete stop at a stop sign, which, hey,
that makes sense. This is like one of the biggest
violations a driver can commit, and you definitely need something
to help ensure drivers follow this process because I mean,
they're on the road all day, so they have the

(17:24):
potential for getting involved in collisions more than the average
person does. You know, the average person is not on
the road all day. And it also tracks whether or
not the driver has buckled their seat belt at the
conclusion of each stop, or whether or not they've gotten
out of their seat. However, the system will also trigger
if a driver takes a drink without first pulling over

(17:45):
to the side of the road to come to a
complete stop. So if you got your morning coffee with
you and you're an Amazon delivery driver, you have to
come to a complete stop before you can take a
sip of coffee, or you will get your image
captured and a violation will be logged on your profile. Also,
drivers aren't allowed to touch the center console without first

(18:07):
stopping because that's considered a distraction. And the cameras are
not providing a live feed for the whole day. It's
not like there's some security office within Amazon where there's
this one person looking at a wall of monitors trying
to keep up with all these different drivers. It would
be impossible to do that. Instead, AI incorporated into the

(18:28):
system monitors the camera view and captures video should a
driver do anything that violates these policies, and it's all
in the name of safety. As Amber Gertz says in
the TikTok, pretty much every Amazon driver hates this system,
which includes multiple cameras set up within the vehicle and
also forward-facing cameras to keep track of things like how far

(18:50):
away you are from the traffic in front of you.
But she also generously says this is all in an
effort to keep drivers and others safe. Also, Amazon drivers can
dispute violation reports, and Gertz even mentions a case where
a driver scratched his beard while he was driving and
the system mistakenly believed he was talking on a cell

(19:12):
phone and so dinged him with a violation, and so
he was able to dispute that and get it reversed.
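
To make the mechanism concrete, here is a minimal sketch of the event-triggered pattern described above: the camera feed is analyzed continuously, but a clip is only captured and a violation only logged when a rule fires, and the driver can dispute a flag afterward. The rule names, data structures, and detector hook are hypothetical stand-ins for illustration, not Amazon's actual system.

```python
# A minimal sketch of event-triggered monitoring, assuming a hypothetical
# vision model that emits named events. This illustrates the pattern
# described in the episode, not Amazon's actual system.
from dataclasses import dataclass
from datetime import datetime
from typing import Callable, List


@dataclass
class Violation:
    kind: str
    timestamp: datetime
    clip_id: str
    disputed: bool = False


# Hypothetical rule names mapped to human-readable descriptions.
RULES = {
    "rolling_stop": "did not come to a complete stop at a stop sign",
    "seatbelt_unbuckled": "drove off without buckling the seat belt",
    "drinking_while_moving": "took a drink without pulling over first",
    "touched_console": "touched the center console while moving",
}


class DriverMonitor:
    def __init__(self) -> None:
        self.violations: List[Violation] = []

    def on_event(self, event: str, save_clip: Callable[[], str]) -> None:
        """Called when the detector flags behavior; only then is video saved."""
        if event in RULES:
            clip_id = save_clip()  # capture footage only for flagged moments
            self.violations.append(Violation(event, datetime.now(), clip_id))

    def dispute(self, clip_id: str) -> None:
        """Let a driver contest a flag, e.g. a beard scratch misread as phone use."""
        for violation in self.violations:
            if violation.clip_id == clip_id:
                violation.disputed = True
```

The point of that structure is that nothing is streamed or watched live; footage only exists for the moments the rules flag.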
Now I can honestly say I feel really conflicted about
this whole approach. On the one hand, this is taking
employee surveillance to the extreme, there's no doubt about it.
But on the other hand, the system has also allegedly
contributed to a reduction in collision rates of thirty five percent,

(19:33):
and considering that collisions often result in injuries and property damage,
that's significant. And I kind of wonder what Ben Franklin
would have to say about all this, with his views
on liberty versus safety and all By the way, that
famous quote is more complicated than the quote itself would indicate.
I recommend looking into what he was talking about when

(19:55):
he was chatting about liberty and safety, and hey, I
mentioned TikTok. Let's talk about that really quickly. Canada has
now banned TikTok from federal government devices. The White House
here in the United States has done the same and
has given federal employees thirty days to wipe TikTok off
any government owned devices. There are a few special cases

(20:15):
where there are exceptions for things like security research or
law enforcement, but for the most part this is a
federal government-wide ban. Several state governments in the US
have done the same sort of thing. The EU has
started to take action as well. For the US and Canada,
the main concern here is that TikTok's parent company, ByteDance,

(20:36):
is a Chinese company, and as such could potentially be
scouring the app for data in an effort to gather
intelligence on behalf of the Chinese government, specifically the Communist Party.
For the EU, it gets a little more complicated because
even if you ignore the connection to China, TikTok itself
is based in the United States, and the EU is
a real stickler when it comes to protecting EU citizen

(20:59):
data from being collected and exploited, and that includes keeping
the information of the government safe, so they don't want
the US to just get access to that. Meanwhile, China's
Foreign Ministry Office issued a statement saying the US quote
has been overstretching the concept of national security and abusing

(21:20):
state power to suppress other countries' companies. How unsure of itself
can the US, the world's top superpower, be to fear
a favorite young person's favorite app to such a degree
end quote. First of all, I don't think that favorite
thing was meant to be repeated, but secondly, shots fired

(21:41):
China's sick burn. Of course, I should also point out
that there are literal laws in China that compel citizens
and companies to act as agents on behalf of gathering
intelligence for the Communist Party, so there's not a healthy
leg to stand on there. Also, China, oddly enough, has
famously blocked tons of apps and services originating in the

(22:05):
West in an effort to prevent their citizens from accessing them.
So again, not exactly taking the high ground on that
front either, but yeah, you zinged us, China. LastPass,
the password vault company, revealed that hackers were able to
access an employee's home computer, and in the process they

(22:27):
got access to a decrypted vault, a corporate vault, not
a user vault. This is on the corporate side. Now,
you might recall that the same service revealed last year
that hackers had penetrated some customer vaults through other means. Currently,
LastPass says it does not look like this attack
and those previous attacks were connected at all. Whatever the case,

(22:49):
LastPass users should change not only all the passwords
that they had stored in LastPass's vault, but also
their master password for their LastPass account. This is
a worst-case scenario, and while Ars Technica points out
that we do not know yet if hackers have access
to individual users' vaults and their passwords, you have to

(23:11):
operate under the assumption that they do, and that further,
this data could end up being sold on the dark web,
So you definitely want to get out there and start
changing this stuff now. I've long advocated for password vaults
as they make the worst parts about passwords a little
more user friendly. That is, by using a password vault,
it's easier to create unique, strong passwords for every service

(23:35):
that you access. These passwords are difficult to crack, but
they're also hard to remember, and because they're all unique,
you've got this ton of different passwords that are hard
for you to just keep in your memory. So it
gets to the point where it can be impossible to
remember all of your unique passwords. So a vault's a

(23:57):
great solution unless something like this happens.
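
As a small illustration of that "unique, strong password for every service" idea, here is a sketch using Python's standard secrets module; the service names and the length are arbitrary examples I've chosen, and a real vault also encrypts the whole store behind your master password.

```python
# A minimal sketch of what a password vault automates: one long random
# password per service, never reused. Service names and length are
# arbitrary examples; a real vault also encrypts this store.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation


def generate_password(length: int = 20) -> str:
    """Return a cryptographically random password of the given length."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))


# One distinct password per service: easy to generate, hard to memorize,
# which is exactly the problem a vault exists to solve.
vault = {service: generate_password() for service in ("email", "bank", "shopping")}
for service, password in vault.items():
    print(f"{service}: {password}")
```

And while these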
security events are rare, we've seen they're not impossible and
that it then falls to us to take action to
make sure we keep our data and our services as
safe as possible. LastPass is not the only target
to have a catastrophic breach. Another is the United States

(24:17):
Marshals Service, which announced last week that attackers were able
to gain access to secure systems or assumed secure systems
and potentially retrieved sensitive information. The service did say that
the information may include data about subjects who are currently
under investigation. It might include administrative information and also personal

(24:39):
data regarding some of the staff of the agency, among
other things. However, one system that they said was unaffected
was the Witness Security Program, which is more commonly known
here in the US as the Witness Protection Program. This
is the famous program that aims to create new identities
for witnesses who are involved in cases for major crimes,

(25:01):
and this is all in an effort to keep those
witnesses safe from retaliation. It's pretty much a key ingredient
in a ton of movies and TV shows that are
about the mafia. It's frequent that someone gets put into
witness protection so that the mafia is unable to track
them down and target them. According to the agency's representatives,

(25:22):
the hackers were unable to breach that particular database, so
that is some good news. Okay, we're gonna take another
quick break. When we come back, I've got a few
more news stories that we need to talk about. We're back,

(25:44):
all right. So last year, News Corp, that's Rupert Murdoch's
company that owns multiple newspapers and some other media outlets,
announced that hackers had gained access to corporate systems. We
found out about this February twenty twenty two. However, now
we have a little extra information, and it's that the
hackers had essentially embedded themselves inside News Corp's systems for

(26:08):
nearly two years. In a recent letter to at least
some of the company's employees, the corporation revealed, quote, based
on the investigation, News Corp understands that between February twenty
twenty and January twenty twenty two, an unauthorized party gained
access to certain business documents and emails from a limited

(26:29):
number of its personnel's accounts in the affected system, some
of which contained personal information end quote. The letter also
says that News Corp doesn't believe the intrusion was focused on
stealing personal data, and that identity theft is likely not
the purpose of this attack, but rather that the intruders
were gathering intelligence. When you look into the information that

(26:50):
the hackers were able to access, it gets pretty gross
for the employees who are affected because it includes not
just stuff like their names and addresses and birth dates,
but also things like their Social Security number, their driver's
license number, their passport number, that kind of thing. It's
understandable that employees who are affected would be very much

(27:14):
concerned about this, so the company is providing affected employees
the option of Experian services to protect against identity theft
and that kind of thing. The identity of the attackers
remains unknown, so it's not really possible to say definitively
what they were up to or how they intend to
use the information they accessed. The leading hypothesis is that

(27:36):
the attackers were aligned with the Chinese government, so this
could be an example of a state sponsored attack, But
from what I've seen, there's nothing that definitively shows that,
or at least nothing that anyone has publicly acknowledged, and
my guess is the investigation is probably ongoing. The website
The Drive has an article that brings up a potential

(27:56):
hazard with autonomous vehicles that I hadn't really considered before,
which is silly because it's such an obvious use case
that I'm sure most of y'all are way ahead of me.
So this is really an oversight on my part, but
it's that the Ford Motor Company has recently been awarded
a patent regarding vehicle repossessions. So instead of sending Emilio

(28:19):
Estevez to repossess a car after the owner falls behind
on their payments, shout out to anyone who recognizes that reference.
Ford is suggesting that future vehicles that are outfitted with
autonomous operation features would just drive themselves to a location
where a tow truck could meet up with it, or
it would go straight to the repossession agency or maybe

(28:41):
even a junkyard. This would save the people who are
driving tow trucks the potentially dangerous job of going to
an owner's property to repossess a vehicle, So, in other words,
a car would effectively repossess itself. Ford's patent also describes
features for cars that would not necessarily have autonomous capabilities,
where Ford would be able to shut down certain options

(29:02):
within the car remotely, some of them not even being options,
some of them just being outright features. So things like
power locks or cruise control or air conditioning, or even
disabling the engine itself, rendering the car inert. The patent
describes the process by which an owner would be alerted
in advance, which would give the owner the opportunity to

(29:24):
make good on payments. Otherwise, well, a car might start
to lose all those features, or eventually even drive itself
to the repossession agency, or, like I said, in cases
where repossession would be viewed as being too expensive, like
a bank would say, oh, it doesn't make sense financially
for us to repossess this vehicle, they might just have

(29:45):
the car drive itself to the junkyard, which gets kind
of sinister when you think about it, right, because a
car autonomously driving itself to a junkyard for it to
presumably be junked. That's grim stuff. Pixar could have a
field day with that concept. And now for a horrifying

(30:07):
and infuriating story involving a car company fully embracing a
dystopic philosophy, or rather a third party that works with
a famous car company doing so. A sheriff's office in
Illinois encountered unthinkable resistance from Volkswagen's Car-Net service while

(30:28):
trying to track down a stolen VW vehicle that had
a two year old boy inside it. So the story goes,
this mom drives home with her two kids and she
pulls up in her driveway, gets out, takes one kid inside,
comes back out to retrieve her second kid. But meanwhile, a

(30:48):
group of car thieves had driven up behind her vehicle.
They ended up beating up the woman, stealing her car,
running her over, and driving off with her two year
old son in the car. She was able to call
nine one one and report the car and her child

(31:09):
having been taken by these thieves and get medical attention
as well. So anyway, the sheriff calls Karnet because Carnet
is a service that allows Volkswagen really Carnet itself to
remotely track and even control vehicles to an extent. So

(31:29):
the detectives are like, we need to know the location
of this vehicle right away, and the representative from Car-Net says, well,
she let that subscription lapse, so I'm going to need
one hundred and fifty dollars reactivation fee before I can
give you that information. A boy's life was hanging in

(31:50):
the balance, and this representative for Car-Net is like, can't
give you that info till you cough up the fee.
The detective actually did pay the one hundred fifty dollars
because the detective was aware that a boy's life is
worth more than one hundred and fifty dollars. It is
taking everything in me not to swear during this news item.

(32:14):
It is so unthinkably awful. The detective then, of course,
posted about this incident on Facebook. Volkswagen has responded by
calling it a quote unquote serious breach of its process
for how it works with law enforcement. And again, to
be clear, Car-Net is a third-party service that partners

(32:36):
with Volkswagen, so ultimately Car-Net is responsible for this horrible incident,
but Volkswagen shares some of the blame as well. As
for the child, I am happy to report that the
child was found safe. A witness saw thieves pull into
a parking lot. They took the kid out of the car,

(32:57):
then they drove off, and this witness was able to
rescue the kid before he could wander into traffic. The
police subsequently found the woman's Volkswagen. The woman, who was
seriously injured as she tried to rescue her son during
the theft, is currently recovering in the hospital and I
think Car-Net has a really long way to go to

(33:17):
atone for this. This was unspeakably inhumane. Finally, on a
brighter side, the Competition and Markets Authority, or the CMA, this
is an antitrust kind of organization in the UK. This
is one of those organizations that looks to make sure

(33:37):
that the marketplace remains fair and competitive. The CMA has said
that third parties indicate consoles could be moving away from
the eight-to-ten-year cycle that we're familiar with, right,
that typically there's around eight to ten years between generations
of consoles, and that we might see them move to

(34:00):
three to four year cycles instead, So every three to
four years you would have a new version of say
Xbox or PlayStation, and that concerns me a little bit
simply because of the economic side of things. Like, it
could be really exciting to people who are thinking, oh,
every three or four years, I'm going to get a

(34:20):
chance to buy a new console with better components and better
features and that sort of thing, and that is exciting.
The thing that concerns me, however, is that currently the
way companies like Microsoft and Sony typically market their consoles
is they sell them at cost or sometimes even at

(34:43):
a loss, And the reasoning behind it is that you
go out, you buy your console, and then you end
up spending money on games and services, and that's how
companies like Microsoft and Sony end up seeing a profit
from those sales. It's not from the hardware where they're
taking a loss, but it's from the use of that hardware. Well,
that use is stretched over a decade essentially, or eight years,

(35:07):
that's a long tail for you to be able
to make your profit off of these pieces of hardware.
If that gets reduced down to three to four years,
then we're probably also looking at a future where these
consoles are going to cost more because they're not going
to be as willing to take a big loss on
the hardware sales because there won't be as much time

(35:29):
to recapture those costs over the lifespan of the consoles.
If people are changing every three to four years, then
they're not necessarily, you know, realizing the profits
off a single console generation that they would with a
longer development cycle. So my guess is that such a
future would see consoles being more expensive, or that at least

(35:53):
you'd be looking at the companies moving away from selling
them at a loss. Maybe they would continue to sell
them at cost, but I would guess they would choose
a way to see better profit margins off the hardware sales,
because otherwise they're leaving money on the table. It doesn't
make sense to me otherwise. This is just my assumption.
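
To put rough numbers on that assumption, here is a back-of-the-envelope sketch; the hardware subsidy and per-year software margin figures are made up purely to show the shape of the argument, not actual Microsoft or Sony economics.

```python
# Back-of-the-envelope sketch of the argument above, with made-up numbers:
# a shorter console generation leaves less time to earn back any hardware
# subsidy through games and services on each console sold.
HARDWARE_SUBSIDY = 100          # hypothetical loss taken on each console sold
SOFTWARE_MARGIN_PER_YEAR = 60   # hypothetical profit per owner per year


def lifetime_profit(generation_years: int) -> int:
    """Net profit per console over one generation, under the assumptions above."""
    return SOFTWARE_MARGIN_PER_YEAR * generation_years - HARDWARE_SUBSIDY


for years in (8, 4, 3):
    print(f"{years}-year cycle: net {lifetime_profit(years)} per console")
# 8-year cycle: net 380 per console
# 4-year cycle: net 140 per console
# 3-year cycle: net 80 per console
```

If the per-console window shrinks, the obvious lever is to stop subsidizing the hardware, which is the gut feeling being described here.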

(36:14):
I don't know that for sure. Also, this is based
off the CMA citing third party reports. This isn't coming
from Microsoft or Sony. So until we start seeing those
announcements come from the actual companies, we could say that
this is just a rumor, but it's one that makes
me think that if that were to come to pass,

(36:35):
that we would see more expensive consoles in the future.
That's just my feeling, my gut feeling on the matter.
I don't have anything to base that off of, except,
you know, thinking it through, and I could be totally
wrong on this. All right, that's it from the Tech
News for Tuesday, February twenty eight, twenty twenty three. I
hope you are all well. If you have suggestions for

(36:55):
me to talk about on this show, well,
there are a couple of ways of reaching out to me.
One is to go to Twitter and to tweet at
TechStuffHSW, that's the Twitter handle for the show.
Let me know what you would like to hear, or
you can download the iHeartRadio app. It's free to download,
free to use. You can navigate over to TechStuff
by putting it in the search field. Hit that little

(37:17):
microphone button that will let you leave a voice message
up to thirty seconds in length, and you can talk
to me, Goose. Okay, that's enough references to the eighties
in this show. Golly gee willikers, you can tell I'm
getting old, all right. I hope you are all well,
and I'll talk to you again really soon. TechStuff

(37:43):
is an iHeartRadio production. For more podcasts from iHeartRadio, visit
the iHeartRadio app, Apple Podcasts, or wherever you listen to
your favorite shows.
