Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to Tech Stuff, a production from iHeartRadio. Hey there,
and welcome to Tech Stuff. I'm your host, Jonathan Strickland.
I'm an executive producer with iHeartRadio. And how the tech
are you. It's time for the tech news for Thursday
May eighteenth, twenty twenty three. So first up here in
(00:27):
the United States, the Supreme Court decided not to hear
two cases that otherwise could have forced a decision about
the infamous Section two thirty rules. So, in case you
need a refresher, Section two thirty is part of Title
forty seven of the United States Code. Probably need to
add more context to that. It was introduced in the
(00:50):
Communications Decency Act of nineteen ninety six, that in turn
is part of an even larger act called the Telecommunications
Act of nineteen ninety six. And the whole point of
Section two thirty is that it gives online platforms protection
from liability if users post stuff that's you know, illegal
(01:12):
on those platforms. So, in other words, let's say I
created a YouTube video and my video contained illegal content
in it. Well, YouTube slash Google slash Alphabet wouldn't be
held legally responsible for what I did because of Section
two thirty. It's just the platform. I'm the one who
(01:34):
committed the crime. So this protects the platforms from
being held legally liable for hosting content that's illegal
for whatever reason. It does get a little more complicated
than that, but that's the basic idea. And there were
a couple of notable cases, big, big emotionally charged cases
(01:57):
that were recently submitted to the Supreme Court for consideration,
and they could have served as a test for Section
two thirty's legitimacy from a constitutional standpoint. But it turns
out that's not happening because the Court found that neither
case had merit for other reasons, like they wouldn't hear
(02:17):
the case, not because of the Section two thirty thing,
but for other reasons. Essentially, the Supreme Court said that
the cases were accusing platforms of violating the Anti-Terrorism
Act and that that particular law shouldn't have applied in
the first place. So there was no case there, right, Like,
they can't use that law as the reason to bring
(02:39):
a case against the company because it doesn't apply. So
the Supreme Court said it would not be weighing in
with a decision about Section two thirty because the case
isn't relevant. So you could say the Supreme Court has
sort of punted the decision regarding Section two thirty down
the field, and it will take some other legal matter
(03:00):
in the future that involves Section two thirty to make
the Supreme Court, you know, make an actual decision that
settles the question about whether or not Section two thirty
is constitutional. The US state of Montana has become the
first state in our nation to issue a ban on TikTok.
(03:23):
The ban will not actually take effect until January first,
next year, assuming that the various challenges to this new
law don't end up making the whole matter moot. So
the justification for banning TikTok boils down to a concern
that the Chinese Communist Party is essentially relying on TikTok
(03:44):
to gather intelligence about US citizens and institutions. So the
reason for banning TikTok is to prevent Chinese surveillance. TikTok,
for what it's worth, disputes these accusations and says that
no one from the Chinese Communist Party has access to
data on its US data servers. The American Civil Liberties
(04:05):
Union or ACLU, argues that banning TikTok amounts to violations
of the First Amendment, aka the right to free speech,
due to the fact that folks depend upon the platform
to express their views and to view others. So the
ACLU's argument says, the law is unconstitutional, so it should
just be voided. It should not be put into place
(04:27):
because it violates constitutional rights. From a technical perspective, banning
an online service from a specific state comes with its
own set of challenges. If TikTok is allowed elsewhere, like,
if it's available anywhere other than Montana, how do you
prevent it from crossing state lines? So Montana says it
(04:49):
will fine TikTok if the service continues to operate within
Montana's state boundaries, and further, it will also fine online
app stores like Apple's and Google's if they do
not prevent folks within Montana, citizens of Montana, from downloading
the app. But again this gets tricky. I mean, you
(05:11):
could use a VPN, a virtual private network, which would
make it look as if you're not in Montana. So
you're in Montana, you decide you're going to use this VPN,
and it makes you look like you're in North Dakota
or something. Well, you just bypass that whatever geo fencing
strategy was in place to prevent TikTok from getting to you.
So does that then mean Montana would also have to
(05:33):
consider a ban on VPNs to try and prevent the workaround.
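For what it's worth, here's roughly what that geofencing check looks like under the hood. This is a minimal sketch, assuming IP-based geolocation; the lookup table and the addresses are invented stand-ins for a real geolocation database, not anything TikTok or Montana has actually published:

```python
# Minimal sketch of IP-based geofencing. GEO_DB stands in for a real
# IP-geolocation database; these documentation-range addresses and their
# regions are invented for illustration.
GEO_DB = {
    "198.51.100.7": "US-MT",   # hypothetical home connection in Montana
    "203.0.113.42": "US-ND",   # hypothetical VPN exit node in North Dakota
}

BLOCKED_REGIONS = {"US-MT"}

def is_blocked(client_ip: str) -> bool:
    """Block based on the *apparent* region of the source IP.

    The service never sees where the user physically is, only the IP
    address the traffic arrives from, so tunneling through a VPN
    changes the answer.
    """
    region = GEO_DB.get(client_ip, "UNKNOWN")
    return region in BLOCKED_REGIONS

print(is_blocked("198.51.100.7"))   # True: direct connection from Montana
print(is_blocked("203.0.113.42"))   # False: same user, routed through a VPN
```

The whole check hinges on the source address of the traffic, which is exactly the thing a VPN rewrites, and that's why enforcement at a state boundary is so leaky.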
I honestly think this law is a lost cause at
a state level. I just don't think it works. It
doesn't really work on a technical level. It may not
work on a legal level. I'm no legal expert, and
I don't think it works on a social level either.
(05:55):
As for TikTok's potential for harm, I have some thoughts
about that. I mean, it is true that TikTok is
a subsidiary of a Chinese company, ByteDance, and it's
also true that China has laws that compel citizens and
companies to gather intelligence on behalf of the Chinese Communist Party.
These things are true. But even if you were to
(06:17):
wipe TikTok off the face of the earth, let's say
we just obliterated it from space, because it's the only
way to be sure, the Chinese Communist Party could still
scoop up tons of information about US citizens, because news flash,
our personal data is floating around in various databases all
over the place, like our activities online are constantly being
(06:43):
tracked and not like actively monitored, but certainly notated. There
are records of all this stuff that we do online,
particularly on things like social networks, And there's an industry
that's grown up around the buying and selling of personal information.
And I'm not even talking about clandestine stuff here, like
(07:03):
for the purposes of espionage. The online advertising ecosystem depends
upon this infrastructure of personal data being bought and sold.
So even if we get rid of TikTok, there are
plenty of ways anyone, including the Chinese Communist Party, could
gobble up personal data, because there's so much that's out
(07:24):
there that's just being bought and sold all the time. Anyway,
now that being said, I also recognize that TikTok has
the potential to cause great harm, not by being a
surveillance tool necessarily, but by serving up harmful content to users,
particularly impressionable younger users, and a lot of young people
use TikTok. However, that's also the case with lots of
(07:46):
other social networks and platforms that serve up user generated content.
They also can be potentially really harmful to specific people
particularly impressionable young people. But this really gets more into
how platforms depend heavily on algorithms to serve up content
in an effort to try and keep eyeballs on the
(08:07):
service for as long as possible, right, Like, their whole
goal is to keep you there and serve you ads,
and the longer they keep you there, the more ads
they serve and the more revenue they generate. So to
do that, they design algorithms that are essentially looking for hooks.
They're looking for things that interest you, and if you
(08:28):
indicate that you're interested in something like if you were
to watch a specific type of TikTok video all the
way through, the algorithm says, aha, this is what this
person likes. Let's grab things that are similar to that
and keep serving it to them so that they stay
on here longer. If that thing that you watched happens
to be harmful in some way, like, for instance, let's
(08:51):
say it's promoting something like a behavior that falls into
the category of anorexia, and you happen to have a
vulnerable self image issue, you could end up seeing video
after video that's reinforcing that particular message, and that
can be harmful. I think that's where TikTok poses a
(09:12):
lot of harm. But at the same time, like I said,
it's true across lots of different platforms. It's not unique
to TikTok. It is insidious, it is a problem, but
it's not a TikTok problem. It's bigger than that. So
I guess what I'm really saying is that Montana's law
it seems like it's going to be challenging to enforce,
(09:34):
it might not be constitutionally sound, which means it won't
be around forever anyway. And in the end, the most
tragic thing I think is that it's not actually addressing
the problems that really do exist, whether TikTok does or not.
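The engagement loop described above can be sketched in a few lines. To be clear, this is a toy illustration of the general recommend-what-gets-watched pattern, not TikTok's actual algorithm; the categories and the scoring rule are invented:

```python
from collections import defaultdict

class ToyFeed:
    """Toy engagement-driven recommender: finishing a video boosts its
    category, and the feed serves whichever category scores highest."""

    def __init__(self):
        self.weights = defaultdict(float)  # interest score per category

    def record_view(self, category: str, watched_fraction: float) -> None:
        # Watching all the way through is the strongest "hook" signal.
        self.weights[category] += watched_fraction

    def next_category(self) -> str:
        # Serve whatever currently holds the viewer's attention best.
        return max(self.weights, key=self.weights.get)

feed = ToyFeed()
feed.record_view("cooking", 0.3)   # skipped partway through
feed.record_view("dieting", 1.0)   # watched to the end
feed.record_view("dieting", 1.0)   # and again
print(feed.next_category())        # dieting
```

Nothing in that loop knows whether the "dieting" content is healthy for this particular viewer; it only knows it keeps them watching, which is exactly the failure mode described above.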
On a related note, the Federal Trade Commission, or FTC
here in the United States has accused a fertility tracking
(09:56):
app called Premom of being real loosey-goosey with sensitive
data and thus violating the FTC's Health Breach Notification Rule
in the process. So obviously, if you are using a
fertility tracking app, you are also sharing some very personal
private information with that app. This is data that traditionally
(10:19):
would be trusted to a healthcare provider, and healthcare providers
need to follow very strict rules to keep patient information
secure and private. That's like one of the top concerns
for handling data in the healthcare sector. But the FTC
says that Premom was sharing personal and private data
(10:42):
with third parties, including advertisers, and that also included data
that would make it trivial to identify a specific user
rather than anonymized data that would keep your identity secret.
And the FTC says that among those entities that it
shared this information with were two Chinese mobile analytics companies
(11:06):
that had previously been flagged as showing quote, suspect privacy
practices, end quote, according to Connecticut's Attorney General, so
this is an example of what I was just talking about. That,
you know, TikTok does not represent the sole weakness in
protecting US citizens private data, even just from China. The
(11:28):
company Easy Healthcare has copped to the fact that it
has inappropriately shared private information with companies, including two Chinese companies.
It has agreed to pay a one hundred thousand dollars
civil penalty and has marked another one hundred grand to
be split between the states of Connecticut and Oregon, as
(11:48):
well as Washington, D.C. Also, Premom will no longer
be allowed to share personal data with third parties, and
it also has to ask third parties with whom it
had previously shared information to delete that information. However,
there's no legal requirement for those third parties to do that.
There's no like enforcement teeth that will make them have
(12:11):
to go and delete that information. So whether that happens
or not remains to be seen, but I remain skeptical. Okay,
we've got a lot more stories to go over before
we get to those, let's take a quick break. We're back.
(12:35):
So the British broadband and mobile provider BT Group has
announced that over the next several years from now to
twenty thirty, essentially the company plans to cut around fifty
five thousand jobs, which would represent more than forty percent
(12:55):
of its current workforce. Now this isn't just some sort
of disastrous news. BT Group, over the last several years,
has been working to roll out internet fiber infrastructure as
well as five G deployment, and throughout that process they've
had to bring on lots of extra hands to get
(13:17):
stuff done, including a lot of contractors. So part of
this is just that once it actually has finished that
project of creating this infrastructure and deploying five G it's
not going to have the need for all those people
who are currently making that happen. For another matter, BT
(13:37):
Group anticipates that AI and automation are going to end
up handling a lot of the tasks that are behind
the scenes, and that will mean there will need to
be fewer actual human beings doing that stuff. Teams themselves
on BT Group side will need to be smaller or
they won't need to be as large, if you want
to look at it that way. So the company has
(13:59):
already met with the Communication Workers Union or CWU
to kind of talk this over right, because obviously, with
unions and everything, a company can't just willy nilly make
decisions to cut tens of thousands of people over the
next several years. And the CWU says that BT Group
really needs to focus on contractors first, and you know,
(14:22):
the people who had been hired specifically for things like
building out this infrastructure and to sunset those positions first.
That's where the priority should be before you start to
even touch company staff. On the one hand, the story
does feel like it's starting to lean a little bit
into the fear and uncertainty and doubt about AI taking
(14:43):
jobs away. There is an element of that here, But
on the other hand, it also stresses how it is
important for companies to try and be efficient and to
avoid the trap of creating a workforce that's too large
to support the actual amount of work to be done.
You know, we've heard again and again in the States
that tech companies had come to a conclusion that leaders
(15:06):
in tech companies, I should say, came to the conclusion
that they were overinflated in their workforce. They had too
many people and not enough work to go around. I'm
sure that was true in some cases, and it doesn't
do anyone any favors for a company to just kind
of act as a holding facility for adults when they
(15:27):
don't have anything to do. They're not being productive, they're
not adding anything to the company, they're not adding anything
to society. Their time could be better spent doing something else.
Although it might be comforting to know that you've got
a steady paycheck even if you don't have steady work.
So yeah, there's both sides to that, and I can
(15:48):
see both sides in that particular story. Over in Italy,
the government has actually set aside thirty million euros to
create programs to help people improve their work skills, specifically
in an effort to smooth the transition to a future
where certain types of work are more likely to be
automated or handed over to AI. So the goal is
(16:09):
to identify sectors that are most likely to be impacted
by automation and to prepare people who are currently employed
in those jobs to learn marketable skills so that they
can then change career paths to something that's more sustainable.
I think that's great. I think it recognizes that if
you have an unskilled workforce that's harmful to everybody. It's
(16:31):
not just the workers themselves, although clearly it's a hardship
on them, because if you're suddenly out of work and
you don't have marketable skills, it becomes very difficult, right,
But beyond that, it is hard and bad for society
at large for that to happen. It becomes an impediment
to the whole, not just to the person. So it
(16:51):
makes sense to build in these systems to try and
help people prepare for the future so that you have
a minimal impact on both the individuals and the country
as a whole. I do think it's going to require
way more than thirty million euros to adequately prepare people
for how AI and automation are going to disrupt multiple industries.
(17:14):
But it's a start, and something like that should
be applauded, that there's actual effort being put toward working
on this. Now for some really fun stuff. So YouTube
participated in essentially what amounted to some upfronts recently and
made a few announcements that are sure to irritate certain users. So,
(17:38):
first off, upfronts are a type of industry event. If
you've never heard the phrase upfront before, here's what it is.
It's a kind of event where a platform that carries
some form of content and thus advertises against that content
gets up front of actual advertising companies, or rather up
(17:59):
in front of them. So it's pretty typical for these platforms,
which can include everything from a streaming service to a
cable television network, to trot out some talent. It becomes
kind of a dog and pony show. They'll promote upcoming
content and it's all in an effort to get advertisers
excited and to attract that sweet, sweet cash. Well, YouTube
(18:24):
held its own event that was pretty much an upfront,
and during that event, announced that one change coming to
its service is that for people who watch YouTube on
a television, they may soon encounter ads that are thirty
seconds long, and they might get a single thirty second
long ad rather than two fifteen second ads, and these
(18:47):
will be unskippable ads when they start a video, so
you get a full thirty second commercial before you
can start watching whatever it is you're watching on YouTube
on your television. Further, YouTube is going to trial a
feature that will show ads to people who pause a video.
So you've got a video going, you need to pause
it for a bit, and then an ad begins to
(19:09):
play while the video itself is on pause. Now, the
example that YouTube showed is not quite as obnoxious as
what I first imagined. Like to me, it sounded like
the frame of the YouTube video would suddenly be taken
over by a commercial. No, apparently, it's more like
a banner that appears to the side of the video,
(19:29):
and you might have a video ad playing out in
that banner, but it doesn't replace whatever it was you
were watching, so it's not quite that bad. Plus, they
showed a dismiss button beneath the ad itself, which at
least indicates that you could quickly click on dismiss so
that that little automated ad stops playing. So not as
(19:52):
bad as I first imagined based upon the description, I
guess. You know, it's not the worst thing in
the world. For people who are watching YouTube on
television who are not part of YouTube Premium, I'm sure
this will be frustrating. For people who are
on YouTube Premium, you know you're paying a subscription fee,
(20:13):
you don't get ads at all, so y'all
are good. A professor at Texas A&M University
reportedly gave several students an X grade, which indicates an incomplete.
It's not a fail, but it is an incomplete. And
this included students who were at senior level who otherwise
(20:33):
would have graduated but then were denied diplomas because they
had a course where they had an incomplete. So why
did the professor give incompletes to these students. Well, allegedly
what happened is the professor assigned several essays and then
took essays that were submitted by students and fed the
(20:54):
essays into what he referred to as chat GTP. Of
course he meant chat GPT, not GTP, but that mistake
is easy to make. I'm not gonna give him too
much grief for that. I will say the Rolling Stones article,
or rather the Rolling Stone article, it's not the band, the magazine.
The Rolling Stone article was way more snarky about this
(21:18):
than I will be. But anyway, he was asking chat
gpt if it had been responsible for part or all
of the various essays, and apparently chat GPT said it
was responsible for at least some part of these essays.
So boom, students get an incomplete because it appears that
(21:39):
the work they submitted was not their own. The problem
with this is that chat GPT doesn't really work that way.
You can submit material to chat GPT that it definitively
did not create and then ask it if it created
the material, and it might say it did, or it
(21:59):
might say it could have, which isn't quite the same thing,
but still, you know, raises doubt. People have actually shown
this off by using passages from classic novels, and chat
GPT just confidently says maybe it actually wrote that, so
you could say, wow, according to chat GPT, it created
Great Expectations or Pride and Prejudice, which would be quite
(22:22):
a trick for chat GPT to have done.
So the students are understandably upset that they got an
incomplete and were accused of plagiarism, essentially of foisting their
work onto an AI chatbot, and they did no such thing,
and the professor did not realize that chat GPT can't
be relied upon to indicate whether or not it generated
(22:45):
a particular work. Heck, some folks went so far as
to dig up the professor's own doctoral thesis when he
was a graduate student and submitted passages to chat GPT
and asked if it wrote the professor's thesis, and chat GPT
essentially said, huh, yeah, I might have written this. Now.
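The underlying issue is that a chatbot has no verifiable memory of what it generated, so asking it is not evidence. A reliable answer would require provenance, a record made at generation time. Here's a toy sketch of that idea; the logging scheme and the example strings are my own invention, not anything OpenAI actually provides:

```python
import hashlib

# Toy provenance log: a trustworthy "did the model write this?" check
# needs a record made when the text was generated, not the model's own
# after-the-fact say-so. The example strings are invented.
generated_log = set()

def fingerprint(text: str) -> str:
    # Normalize whitespace so trivial reformatting doesn't change the hash.
    canonical = " ".join(text.split())
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def record_generation(text: str) -> None:
    generated_log.add(fingerprint(text))

def was_generated(text: str) -> bool:
    return fingerprint(text) in generated_log

record_generation("The quarterly report shows steady growth.")

# Logged text is recognized even with extra whitespace...
print(was_generated("The quarterly  report shows steady growth."))  # True
# ...but a classic novel's opening line obviously isn't in the log.
print(was_generated("It is a truth universally acknowledged..."))   # False
```

Even this only catches verbatim reuse; a light paraphrase defeats an exact-hash check, which is part of why AI-text detection is genuinely hard and why a chatbot's confident yes is worth nothing on its own.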
(23:05):
As I said, Rolling Stone has a pretty snarky article
that throws massive shade at the professor for this, and
I get it. Holding up a person's diploma through the
misapplication of technology is a big deal. But on the
flip side, and at least a little bit in the
professor's defense, the discussion around chat GPT and education has
(23:26):
been so dramatic and so disruptive over the past several
months that I think it's natural for educators to
be concerned that students are passing off AI generated work
as their own work. That is an understandable concern. The
problem is you can't trust the robots to claim authorship
(23:48):
because those rotten whatsits will say they wrote stuff that
was published one hundred years ago, and clearly that's not
the case. So yeah, kind of an absurdly comical situation
here if it weren't for the fact that it also
means a bunch of students were denied the chance to
graduate with a diploma, at least temporarily because of this
(24:11):
incomplete. There are people working towards trying to get
all this resolved, but as I recorded this, I didn't
have an update to give about where we are in
that process. All right, Hey, so you know how AI
large language models are trained by analyzing tons of data
(24:31):
from various sources? Like, chat GPT is built on top
of a model that crawled through millions and millions of
web based documents. Well, what if you did that same thing,
but instead of using the web, you turned to the
content on the dark Web as your training material. Of course,
(24:52):
the dark Web is inaccessible through normal links on the
World Wide Web. You typically get to the dark web
by using special types of browsers that allow you to
access these sorts of things, and you can encounter all
sorts of stuff, like you know, hacker communities that post
(25:14):
malware so that you can take it and tweak it
and deploy it. You know, obviously stolen information is bought
and sold on the dark web. Well, I would say,
don't do that. Don't train AI on dark web material,
not because I think it's going to create dangerous AI,
but because someone already beat you to it. Some researchers
(25:37):
in South Korea introduced an AI system that they call
DarkBERT, and they trained it on information exclusively from
the dark web. So BERT in this case actually stands
for Bidirectional Encoder Representations from Transformers, and it was
originally created by Google back in twenty eighteen. BERT, that is,
(26:01):
was created by Google, and then Meta researchers took BERT
and they continued to evolve it. They began to tweak it,
change it a little bit, and they turned it into
a new AI system called RoBERTa. Cute, right? By the way,
this was back when Meta was still just Facebook, but
of course today it's Meta. RoBERTa then provided the foundation
(26:25):
for these South Korean researchers. They took that framework, but
they trained it on information on the dark web, and
thus we get DarkBERT. Apparently, they say it worked
really well, like surprisingly well, and that tools built on
top of this AI model perform at least as well, if
(26:46):
not better than other AI tools. So for example, if
you were to create an AI chatbot based on this model,
it might end up being as impressive as chat GPT.
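For a bit of grounding on what that BERT-style training involves: these models learn by masked language modeling, where tokens are hidden from the model and it has to predict them from context, and DarkBERT's reported twist is simply that the training text came from the dark web. Here's a toy sketch of just the data-preparation step; the example sentence and the details are invented, and real pipelines do this at enormous scale with subword tokenizers:

```python
import random

def mask_tokens(tokens, mask_rate=0.15, seed=0):
    """Toy masked-language-modeling prep: hide a fraction of tokens so a
    model can be trained to predict them from the surrounding context."""
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            targets[i] = tok         # remember the hidden answer
            masked.append("[MASK]")  # what the model actually sees
        else:
            masked.append(tok)
    return masked, targets

tokens = "stolen credentials are traded on hidden forums".split()
masked, targets = mask_tokens(tokens, mask_rate=0.5)
print(masked)   # some tokens replaced by [MASK]
print(targets)  # the answers the model is trained to recover
```

The training objective rewards the model for recovering those hidden tokens, which is how it absorbs the vocabulary and phrasing of whatever corpus you feed it, whether that's Wikipedia or hacker forums.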
That being said, they are not going to unleash
DarkBERT on the general public. They're going to keep it
(27:07):
under wraps. They are going to allow academic researchers access
to it, so there can be academic applications to try
the tool out, test it out, and to develop different techniques.
It may be used also in ways to get a
better understanding of how the dark web works from kind
(27:28):
of an architectural approach, which could be
really useful in everything from cybersecurity to government investigations. So
it is important. But I just wanted y'all to know
that it's not like an even more evil version of
chat GPT is going to be running on the loose
(27:49):
out there. Okay, We're going to take another quick break
when we come back. I've got a few final stories
to cover for this week, all right. So, the Pew
Research Center, which has done lots and lots of surveys
(28:13):
about various things connected to technology in general and the
Internet in particular, recently conducted a survey that found six
out of ten respondents, so sixty percent of their respondents,
indicated that they've taken a bit of a break from
Twitter since Elon Musk took over. I don't know how
(28:34):
many people were actually involved in this survey, I just
know that around sixty percent of them said that this
was the case. So forty percent said they had not
taken a break, sixty percent said they had, and those
breaks ranged from like a couple of weeks to essentially
leaving the platform since Musk took over. People of color
(28:56):
were more likely to say that they had taken a
break than white users were, But interestingly, other major factors
didn't seem to show as much of a difference. So,
for example, there was very little difference found between people
who leaned conservative versus people who leaned liberal. That both
(29:20):
conservatives and liberals indicated that about sixty percent of them
had taken a break from Twitter recently, and things got
a little more complicated when you started to break it
down by gender, also within political leaning. But I'm not
going to dive into all of that because it would
take up too much time, and honestly, I don't know
(29:41):
what conclusions you could draw from it either, other than
one really big takeaway, which is if that survey is
reflective of a larger trend, which is a big if
you never know, like if the sample size was really small,
then you can't really make any big predictions based
off that. But if it's a representative survey and if
(30:04):
the findings are true, it could mean that Twitter's new
CEO is going to have a lot of challenging work
ahead of her to pull Twitter out of the doldrums,
because it's not enough to tell advertisers we value you
and we want you back on the platform, because you know,
famously Twitter has lost a lot of ad revenue since
(30:25):
Musk took over. They also have to show that their
platform is a place where users want to be. And
if advertisers are seeing reports that sixty percent of Twitter
users are kind of jumping ship, that's not a great
selling point for them to come back to the platform
because they're just not you know, the people aren't there anymore,
(30:45):
so why would you spend money to advertise there. So yeah,
I think that this is bad news for Twitter overall
if in fact the survey is delivering dependable information, and
again that's a big if. It would need, I think,
further investigation to make sure that that's actually what's happening. Hey,
(31:05):
do you remember Elizabeth Holmes. She's the disgraced founder of
the medical tech company Theranos. So if you don't remember her,
here's a very quick overview of who she is and
what she did. So Holmes dropped out of Stanford and
went on to found a company whose aim was to
create a medical device capable of testing a micro drop
(31:29):
of blood for more than one hundred different medical conditions
and diseases. So with a teeny tiny pinprick, you would,
in theory, be able to submit that sample to this device,
which theoretically would be small enough to be like a
desktop printer, and run banks of tests on it to
determine if you are at risk for any particular medical conditions.
(31:54):
And Theranos received a lot of positive press in the
early days, Like they were talking about it as the
democratization of medicine and making medicine and proactive health care
far more accessible and democratized. And there were heavy hitter
investors who poured hundreds of millions of dollars into the
(32:16):
fledgling startup. But then a few years later an expose
revealed that things were pretty shady behind the scenes at Theranos.
The expose claimed that advancements in Theranos technology did not
stand up to scrutiny, that the company was making claims
it could not back up, that it was outright misleading
(32:36):
investors regarding technological progress and relying on various competitor technologies
to make it look like Theranos tech was working. So
charges of fraud and other things were brought against Holmes
and some other folks at Theranos, and Holmes was ultimately
found guilty of at least some of those charges, and
(33:00):
she was headed to prison when she decided to petition
the court to ask to allow her to remain free
while she challenges the conviction. She's attempting to have her
conviction overturned, and she said in the process she would
very much like to not be in prison, please, And
the judge said nah, naw, you're going to the pokey.
(33:23):
So now Holmes appears likely to be headed to prison
in a couple of weeks. And on top of that,
the judge has also levied a four hundred and fifty
two million dollar judgment in restitution that Holmes is supposed
to pay the various victims of her crimes. And when
I say victims, keep in mind, I'm mostly talking about
(33:43):
really rich people who put money into her company. I'm
not talking about necessarily the folks who are depending upon
Theranos to deliver reliable medical information so that they could
make the right decisions regarding their health care. No, so
those aren't the victims that the court's particularly concerned about.
(34:04):
They're concerned more about, you know, Rupert Murdoch, who's obviously
really hurting for cash. Anyway, the moral of this story
should be that the Silicon Valley mantra move fast and
break things, doesn't you know, apply to breaking the law,
at least not when it means that rich people lose money, because,
like I've said many times before, they hate that. Finally,
(34:27):
Bloomberg's Mark Gurman reports that Apple had to make lots
and lots of concessions while designing the upcoming mixed reality
headset that we expect to see unveiled at some point
this year, possibly at the Worldwide Developer Conference or WWDC
in June. Now, the fact that Apple made concessions is
(34:47):
not a surprise. We have heard about this before. I
think everyone has heard that. The initial hope was Apple
was going to produce an augmented reality headset that would
appear indistinguishable from a stylish pair of eyeglasses. However, the
technical requirements that were needed to achieve the desired performance
(35:08):
meant it just wasn't plausible for a company to get
both that and the form factor in one package. You
could either have a stylish pair of eyeglasses that had
very limited utility, or you could have a more useful device,
but it is definitely not going to fit into a
small form factor. So the headset we're getting, which
(35:31):
is reportedly called Apple Reality, will feature a screen that
will feed live video from external cameras to the viewer.
So it's kind of like if you're holding your smartphone
up to your eyes and you've activated your smartphone's rear-facing
camera and you're just looking at a live feed
around you. That's essentially what this is doing. When it's
(35:51):
working in augmented reality mode, it'll also be able to
do virtual reality applications. It will connect to a separate
battery pack, so there'll be some sort of a cable,
I guess attaching the headset to a battery pack that
you would wear somewhere else, maybe like on a belt
or in a pocket or something. And reportedly that's so
(36:14):
that it can take some of the weight off the
headset itself, so that it's a little more comfortable to wear.
You're not wearing both a screen and a battery pack.
And also it'll give you a little more juice so
that you could actually use the darn thing for more
than like half an hour. Right, So, Apple's had to
make lots of compromises in its quest to build this gadget,
(36:37):
and I have a feeling that the company is really
hoping that it becomes similar to the iPhone, right because
the iPhone was not the first smartphone on the market.
It was the first smartphone to get massive consumer interest
and demand. That's what really set the iPhone apart. Not
that Apple was first, but that Apple was able to
(36:59):
refine the approach to that gadget and get the general
public really excited about it. I think they're hoping for
the same thing with this mixed reality headset, because, as
we've seen, lots of other companies have introduced mixed reality gadgets,
but Apple is hoping to kind of define that market
(37:20):
and not, you know, be the first mover, but the best
in class. We've also heard that the price of this
particular technology is likely to be somewhere in the neighborhood
of three thousand dollars, which yikes, that's super expensive. I
think even hardcore Apple fans might hesitate before dropping three
(37:40):
g's on strapping a screen to their face, But then
I've been wrong about them before, so who knows. Okay,
that's it for the tech news for today, Thursday May eighteenth,
twenty twenty three. I hope you are all well, and
I'll talk to you again really soon. Tech Stuff is
(38:07):
an iHeartRadio production. For more podcasts from iHeartRadio, visit the
iHeartRadio app, Apple Podcasts, or wherever you listen to your
favorite shows.