
September 13, 2024 40 mins

We've got a lot of stories about artificial intelligence to talk about this week. Plus, Xbox holds more layoffs, Sony announces a new PS5 model, and for the first time, private citizens go on a space walk. Plus much more! 

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to Tech Stuff, a production from iHeartRadio. Hey there, and
welcome to Tech Stuff. I'm your host, Jonathan Strickland. I'm
an executive producer with iHeart Podcasts, and how the tech
are you? It's time for the tech news for the
week ending on Friday, September thirteenth, twenty twenty four. Happy

(00:29):
Friday the thirteenth, everybody. I'm wishing all of you the
best of luck and hopefully a notable absence of a hulking, undead,
hockey mask wearing psychopath. We start off this episode with
some stories about AI, as is our wont.
First up, Kylie Robinson of The Verge, or Robison perhaps, my apologies, Kylie.

(00:52):
Kylie of The Verge has an article titled OpenAI
releases o1, its first model with reasoning abilities, which
sounds pretty significant, and Kylie explains that o1 is the
first of several planned sophisticated AI models that should be
capable of answering complex queries. While in development, this was

(01:14):
known to the AI crowd as the Strawberry Model. That's
actually the first I've heard of that code name. But
then again, I'm not exactly brought in to hear about
all things AI. The model is said to be able
to tackle multi step problems, which is something that earlier
AI models have had trouble doing, and doing this comes

(01:37):
at a couple of costs. There's the cost of time,
because it does take a little longer for this model
to create an answer, although it typically is still faster
than humans would be able to do it. But then
there's also the financial cost. These models aren't free to use,
and they come with a fairly hefty fee for access.

(01:57):
That fee is like three to four times more expensive
than using the current AI models for other purposes. Now,
according to OpenAI representatives, the o1 model is less
prone to hallucinations than earlier models are, but it is
still not immune to them. So the challenge of creating
an AI model that you know is dependable and accountable

(02:21):
without having to worry that it's just making stuff up, Well,
that problem persists. It's also apparently pretty good at coding.
It can't outperform the best of the best of human coders,
at least not yet, but I feel like if OpenAI
can convince companies that, you know, their AI models
are equivalent to exceptional coders, maybe not the best, but

(02:44):
really good coders, that's a pretty big sales pitch. Coders
who don't have salaries, they don't have healthcare plans, they
don't have stock options or retirement accounts or anything like that.
You know. I know that AI ideally is meant to
augment humans so that they can do their jobs better

(03:04):
and they can offload the really tedious work to automated processes.
But I still have concerns that, at least in the
short term, there are going to be business leaders out
there who will view AI as kind of a shortcut
to downsizing staff and outsourcing work to the robots. Marie
Baran of Newsweek wrote an article this week titled AI

(03:25):
generated junk science is flooding Google Scholar, study claims, and
assuming that the study is correct, it illustrates one of
the many concerns I have about generative AI. Essentially, the
study found that AI generated science papers, fake science papers
in other words, are showing up in Google Scholar right
next to legitimate, you know, scholarly articles within search results.

(03:50):
So Google has been indexing pages that are made by
AI bots, you know, like ChatGPT. The Harvard Kennedy
School Misinformation Review published this study and identified one hundred
and thirty nine papers that are likely generated by AI,
and more than half of those identified papers focused on

(04:11):
topics that relate closely to stuff like climate change, health concerns,
and other policy relevant subject matter. And the danger here
is obvious. The papers may appear to be from legitimate
researchers who followed rigorous scientific practices in order to you know,
draw their conclusions, but in reality, they could just be
propaganda that's pushing a specific point of view and disguised

(04:35):
as a legitimate scientific paper. And considering a lot of
people quote unquote research by just looking for passages in
papers that appear to confirm what they already believe,
you know, just cherry picking. In other words, this could
lead to situations in which folks are citing fraudulent papers
because those papers support specious arguments. Now y'all already know

(04:57):
that I call out for critical thinking regularly on this show,
and this trend really drives home how important critical thinking is.
I am just as guilty as anyone else of grabbing
onto a source that appears to confirm my biases. I
actually have to remind myself to take a closer look
to, one, make sure that the source is legitimate, and

(05:20):
then make double sure that what it's actually saying is
what I think it says, because that's not always the case.
I have been known to misinterpret stuff; that has happened,
so I have to be careful about this too, and
I don't always succeed, but it's good to keep it
in mind, like this is something to strive for, use
critical thinking, and it sounds to me like this is

(05:41):
just going to get more challenging to do as time
goes on. One thing that could help is if Google
developed some tools to make Google Scholar more reliable. You know,
knowing that these things are going to exist, how can
Google Scholar better differentiate between legitimate papers and those that
were generated by AI. That might involve improving transparency regarding

(06:04):
how Google Scholar indexes and ranks scholarly results in the
first place, or giving more tools so you can filter
stuff out, like make sure that the articles you get
are from legitimate peer reviewed sources. I suppose I should
also throw in this potential pitfall. If future generations of
AI models train themselves on fake scientific papers that were

(06:27):
created by earlier AI models, then the quality of content
on the Internet will further decline. Reasoning AI models like
the aforementioned o1 would be more likely to create incorrect
solutions if the training material they used included this sort
of stuff, these fraudulent scientific papers, because AI can't just
magically know what information is relevant and dependable and which

(06:53):
one is just invented in order to push forward a narrative.
Emma Roth at The Verge has an article that's
titled Google is using AI to make fake podcasts from
your notes, and it kind of makes me think of
a recent episode I did on Tech Stuff. I titled
it This Episode Was Written by AI, Sort Of. So

(07:15):
in that episode, I used ChatGPT to create a
tech podcast episode, not an episode of Tech Stuff, just
a tech podcast, and it was supposed to trace the
history and the technology of airbags. And I read the
entire generated episode out in that podcast episode of Tech Stuff.
I then spent the rest of the episode fact checking

(07:36):
and critiquing the AI's work. Now, the most disturbing thing
I encountered with this experiment was that the AI kept
inventing fake experts to deliver various bits of information. Sometimes
that information was wrong, and all the experts didn't really
exist anyway. Roth's article explains that Google created

(07:57):
a feature in its NotebookLM app that will take
notes that you have written down. It will then generate
this you know, AI created podcast hosted by a couple
of AI bots posing as the hosts, and the podcast
has hosts having a discussion about whatever the research topic is,
and it uses your notes to create a conversation between

(08:19):
these two bot hosts, and it kind of riffs off
the information that you have gathered. It sounds like the
AI is careful to only draw information from your notes,
so the output you get should reflect the input that
the bots relied upon. So in other words, if something
is wrong in the episode, it would be because your

(08:41):
notes have wrong information in them or incomplete information. The
AI wouldn't necessarily be hallucinating or confabulating, you know, drawing
from some source you've never seen either, And that's a
good thing. It does remind me. In the case of
the AI episode that I generated, ChatGPT was actually
unable to share with me what the sources were that

(09:04):
it was pulling information from. I asked it to, and
it couldn't. Instead, it gave me a list of sources
that the information might have come from, but there was
no guarantee that any of the information used in the
episode actually came from those sources. Now, in this case,
with Google, the sources are you, or at least the notes
that you've taken. And I think this approach is interesting.

(09:27):
It doesn't strike me quite as off putting as what
I experienced with ChatGPT. For one thing, to me,
this feels more like a study tool. I mean, we
all know that people have different learning styles, right, so
I think this tool could potentially be good for someone
who does take meticulous notes, but that doesn't really help
them absorb the information, right? They don't really understand; they've

(09:51):
got the notes, but it hasn't kind of sunk in.
So I think this kind of approach could create a
way of synthesizing and contextualizing the information that could
be more impactful depending upon the subject matter and you know,
the learning style of the person involved. So as a
studying tool, I think it's a pretty neat idea. Now,

(10:14):
I also wonder if this will ultimately lead to people
using this tool to create podcasts that are hosted by AI,
which could be a problem. I mean, especially considering that
it's going to be based on whatever notes were made
to create the podcast, so you could do it to
make them say whatever you wanted, or you know, not

(10:34):
make them say, but they would say things drawn from
your own perspective in the notes you created. Now, that
could be funny if you were to create really weird
notes about obviously fake stuff, not in an effort to
mislead listeners, but rather as a way to entertain them.
And I'm thinking about shows that are something along the
lines of existing fictional podcasts out there, stuff like Welcome

(10:59):
to Night Vale or Old Gods of Appalachia or my
friend Shay's podcast Kadi Womple with the Shadow People. And yes,
that last one is real. Kadi Womple with the Shadow
People is a real podcast, and yes I am plugging
my friends podcast sort of. So if Southern gothic fantasy
with a healthy dose of feminism is your kick, you
should check it out. In fact, you should just check

(11:19):
it out anyway, give an episode a listen, because Shay
is a great storyteller, and you know, maybe it's your jam,
maybe it's not, but yeah, I could see this tool
being used for that kind of thing. Arguably that
does bring into question the artistry, although you would still
need to put in the work to create the source
notes that the hosts are drawing from. So it's a

(11:42):
gray area for me. Like generally, I tend to be
pretty negative or pretty critical at least about generative AI,
But if it comes down to something like this, where
you have done an enormous amount of work to build
the source material, I'm not quite as adamant that generative
AI is a bad tool to use in this context.

(12:06):
But maybe I just need to think on it more.
Now back to artificial intelligence in general. So on Science Alert,
David Nield has an article that's titled AI chatbots have
a political bias that could unknowingly influence society. Now, I
don't think this should come as a surprise because bias
has been a big issue in AI for decades. Some

(12:28):
experts have argued that not all bias is bad, right, Like,
you might build an AI model that is quote unquote
biased to pick out instances of say, medical images that
could indicate a health hazard, like the presence of say
a tumor, for example. But unintended bias is bad, and

(12:48):
we've seen lots of examples of that with AI, like
with facial recognition technologies and the like. Nield cites a
computer scientist named David Rozado from Otago Polytechnic in New
Zealand who uses various political questionnaires to test different AI
chat models to see where they fall on the political
spectrum based upon their responses to these questionnaires, and according

(13:11):
to his results, the models all fell somewhere left of
center on political matters, and they tended toward a more
libertarian point of view rather than an authoritarian point of view.
None of the models were coming out as like hard
left evangelists or anything like that, but the bias was
present and it was significant. Rozado doesn't believe that the

(13:34):
bias was intentional, but rather it's sort of an emergent quality.
And why is it emerging at all? Well, the best
guess is that the material used to train the AI
models skews left more than it does right. Not all
of it, but that overall, when taken as, you know,
a whole, it skews more left. That there

(13:55):
are more pieces written from a left of center perspective
than right of center. This in turn imbalances the material,
so that leads to a bias in the models. And
it kind of makes me think about how matter and
antimatter are. So when matter and antimatter come into contact
with one another, they annihilate each other. So if there

(14:16):
had been a perfect balance of matter and antimatter at
the dawn of time, there would be no universe to
speak of, because it would have all blowed up before
it could even get started. But for some reason, there
was a little bit more matter than there was antimatter,
and we got the universe. So with these AI models,

(14:37):
the training material had more left of center perspective material
than otherwise. Which, y'all, you know, I lean left so hard.
I walk around at a forty five degree angle. But
I don't think having a biased perspective in the tools
themselves that are meant to provide and contextualize information is

(14:58):
a good thing, even though that bias kind of leans
toward the way that I view the world. I don't
think a bias is good in that respect. It needs
to be as objective as it possibly can, in my opinion,
So if the bias means we can't rely on the
results provided, that ends up being a big problem, especially
considering how gung ho everybody is on AI now. Despite

(15:22):
these findings, we have also seen examples of generative AI
engaged in recreating some really ugly biases as well. I'm
thinking primarily of image generating models that tend to be
guilty of perpetuating racial stereotypes. So I guess you could
see this issue as a wake up call regarding our
own prejudices and biases on top of the issue we

(15:42):
have with AI. Okay, we've got a ton more news
to get through. Let's take a quick break to thank
our sponsors. We're back and we're not done with AI
just yet. Jess Weatherbed of The Verge has a

(16:03):
piece titled Meta fed its AI on almost everything you've
posted publicly since two thousand and seven. And yeah, that
article starts off with a whammy. In fact, I'm just
going to quote Weatherbed at the beginning of the article,
Who writes quote Meta has acknowledged that all text and
photos that adult Facebook and Instagram users have publicly published

(16:25):
since two thousand and seven have been fed into its
artificial intelligence models. End quote. Now this is significant for
many reasons, one of which is that Meta executives had
previously sort of denied that this was the case when
asked by Australian legislators if user data was being exploited
in this way, but ultimately they did cop to the

(16:46):
practice when lawmakers really cornered them with pretty direct questions
that they couldn't just deflect. So essentially, unless users had
set their content to something other than public, you know,
like friends only or private or whatever, then that content
was up for grabs and Meta grabbed it for the
purposes of training AI. Meta didn't go so far as

(17:08):
to explain if there's a cutoff for when the data
scraping happened. So, for example, assuming that it does go
all the way back to two thousand and seven, can
the bots scrape everything that was ever posted to the platform,
at least publicly? And could that be the case even
if the person who posted that stuff was a minor
at the time, so all of those posts, including images,

(17:31):
could be up for grabs. Now, Meta has said it
does not scrape profiles of users who are under the
age of eighteen. Fine, but what about users who today
are adults but have been on Facebook long enough so
that the earliest days of their Facebook use was when
they were under the age of eighteen? Did the data

(17:51):
scraping include their data from that time? I mean, yeah,
today they're adults. But when they posted those things back
in, say, two thousand and seven, they... well, that question
is more murky, and to be truthful, the Meta representatives
didn't really have an answer for it, and that's very concerning. Now,
Meta does allow users to opt out of this data

(18:11):
scraping practice if those users happen to live in the
European Union, where regional laws mandate that Meta create this
option to opt out. Likewise, laws in Brazil required Meta
to cut out the data scraping for AI there as well,
But everywhere else in the world, it's fair game, baby.
If it ain't expressly against the law, Meta is doing it,

(18:34):
which might be food for thought for all the other
countries out there, at least the ones with any interest
at all regarding protecting citizen data from massive corporations. Meta
is also in the news here in the United States,
as Republican Congressman Tim Walberg has some pretty harsh words
for the company after it responded to concerns over how
it has hosted ads for illegal drugs, which I talked

(18:57):
about in an earlier Tech Stuff episode, so to kind
of summarize, Legislators had sent an inquiry to Meta following
reports from the Tech Transparency Project as well as the
Wall Street Journal that detailed how advertisements for illegal drugs
were appearing on Facebook and Instagram, both prescription drugs and

(19:17):
recreational drugs, like log on and suddenly there's an ad
for cocaine on your Facebook feed. And this means that
Meta was not only providing a platform that these illegal
ads got to use, but Meta itself was profiting from
these illegal advertisements. Now, Meta didn't create the ads, they're
just hosting them, but they are profiting from them because

(19:40):
the advertisers have to pay Meta to have this space. Right,
So the lawmakers sent more than a dozen questions to
Meta to really get down to how big an issue
this is, how prevalent is this problem, and what the
heck is Meta doing about it? And Meta essentially responded
by saying, and to be clear, I am paraphrasing like

(20:01):
crazy here, but they said essentially like, yeah, you know,
that's crazy. We agree that's crazy. But Meta is all
about doing its part to fight illegal activity. This is
a big issue beyond any one platform. This is major.
This isn't just us; this is a big problem. Now,
Walberg was not buying this and
called the response unacceptable. I agree with him. He went

(20:24):
on to say, quote, Meta's response not only ignores most
of the questions posed in our letter, but also refuses
to acknowledge that these illicit drug ads were approved and
monetized by Meta and allowed to run on their platforms
end quote. The director of Tech Transparency Project, Katie Paul,
also accused Meta of deliberately sidestepping questions of accountability in

(20:47):
an effort to deflect and to claim that this is
just a bigger issue, much bigger than Meta and its platforms.
And CEO Mark Zuckerberg recently said on a podcast interview
that he thinks in the past he has made the
mistake of accepting responsibility for stuff that subsequently he believes
his company wasn't actually guilty of doing. Like he said,

(21:07):
we got to stop saying we're sorry for stuff that
isn't our fault. That's essentially how it came across to
me anyway. But this particular subject seems pretty cut and
dried to me, because either Meta was selling ads to
clients who ultimately made those ads about illegal drugs, or
Meta did not do that. So either Meta profited off

(21:28):
of illegal advertisements or it didn't. There's no gray area here,
you know. And sure, Meta could argue that the scale
of its business is such that it can't police every
ad or ensure that an ad that was sold actually
ends up being for whatever it was sold to be.
But shouldn't they because advertising is their business, that's what

(21:52):
Meta does. It's where the vast majority of the company's
revenue comes from. So it seems to me that the
company absolutely should prioritize that it ensures that its core
business is legal. Maybe I'm being unreasonable here. I don't
think so, but maybe. David Shepardson of Reuters has a
piece titled TikTok faces crucial court hearing that could decide

(22:16):
fate in the US. So you might remember that lawmakers
here in the United States decided that TikTok would have
to divorce itself from its parent company, ByteDance, which is
headquartered in China, if it is to be allowed to
continue to operate in the United States, and subsequently, TikTok
has argued that such a separation is technologically impossible. Now

(22:38):
that's a claim I personally find hard to believe, though
I do think any separation would require huge changes to
how TikTok operates. It also said it's legally impossible and
financially impossible. Well, next week, the US Court of Appeals
for the District of Columbia will hear arguments about that
legal side from TikTok's legal team, and they're saying that

(22:58):
this law is unconstitutional, that it violates the First Amendment, which
is also known as the freedom of speech. Well, no
matter what the outcome is of this particular case, I
think chances are pretty good it's going to get pushed
upward to the Supreme Court. Both the US Department of
Justice and TikTok's lawyers have asked the Court of Appeals
to render a decision by no later than December sixth,

(23:20):
and that will be just a little over a month
before the nationwide ban is to go into effect, which
is on January nineteenth. That's assuming that, you know, there's
no challenge to this, and it provides very little time
for the Supreme Court to get involved and make a
decision. As to what might happen once it gets to
the Supreme Court, that beats me, because I mean Donald
Trump appointed three Supreme Court justices during his presidency, and

(23:45):
at that time, Trump was actively trying to ban TikTok
by executive order. That was a big deal. However, since then,
Trump has flip-flopped about TikTok. Whether that has anything
to do with a billionaire TikTok investor who made significant
campaign donations to the GOP, I can't say. But certainly
Trump's own explanations of well, you know, kids really like it,

(24:09):
that doesn't seem to be a compelling reason for his
one-eighty-degree turn on TikTok. Anyway, I have
no clue if the current Supreme Court would side more
on what past Trump said back when he was president
or what he says now when he's trying to be
president again. But that decision will happen after the election,

(24:29):
so maybe that could have an impact, I have no
way of knowing. Yesterday, the Food and Drug Administration here
in the United States issued a press release that announced
that Apple AirPods Pro headphones have qualified to be labeled
as over the counter hearing aid software devices, and that
is a first. It's the first over the counter hearing

(24:51):
aid device, or software device I guess, that has ever
received that designation here in the United States. Previous hearing
aids have not been over the counter. You had to
go through a pretty lengthy sequence of visits with various
doctors before you could get a medical device to aid
your hearing. With Apple, users can customize the performance of

(25:13):
their AirPods to meet their hearing needs, assuming that they
have mild to moderate hearing impairment. Beyond that, they would
still need to go through the medical pathway. But folks
who have experienced mild to moderate hearing loss can order
directly through Apple without first going through the whole medical system,
so they don't have to seek out an examination from

(25:35):
their doctor and then you get referred to audiologists and
all that kind of stuff. For people who have insufficient
time or health coverage, this is a huge deal. The
FDA's designation lends credibility to this technology. There's no shortage
of tech out there that claims to be helpful in
various ways in the medical field, but if the products

(25:56):
lack the FDA designation, then there's no authority out there saying, yes,
this stuff works for that intended purpose. Now, I am
not an Apple user, but I do think this is
a great day for folks who have mild to moderate
hearing loss and gives them a lot more options. Maybe
we'll get an Android compatible candidate that also meets FDA

(26:16):
requirements to receive this sort of designation. I would find
that pretty helpful. I mean, I went to way too
many loud music shows when I was in college, and
I'm certainly paying for that now. This week, Sony announced
that starting on November seventh, you can order yourself a
brand new PS five Pro. This model has more oomph
than the previous PS five consoles. So earlier, gamers had

(26:40):
to make a choice. They could play games at the
highest visual settings enabled, but they would do so while
taking a hit on stuff like frame rate, so the
performance of the game would take a bit of a hit,
but it would look gorgeous. Or they could optimize for performance,
which means the graphics wouldn't look as pretty, but the
game would run much more smoothly. So this new PS

(27:03):
five Pro model is meant to eliminate that problem by
providing enough power to run games at their higher visual
settings without impacting the performance, and it would only set
you back seven hundred US dollars, obviously priced differently in
different regions. On top of that, however, this particular Pro

(27:24):
model does not have a disc drive. It's digital only,
so if you wanted a console that could also play discs, well,
then you would need to buy an external drive for
the PS five and connect it to the PS five
Pro to get that capability. I think that's kind of
rough for folks who depend upon a game console to
be a multitasker. Now a lot of game publishers have

(27:47):
ditched physical media in favor of digital downloads, So for
a lot of games, there is no physical disc to
buy anyway. The only way to get the game is
to download it digitally. But there's still a lot of
us out there who are either still collecting physical media
like movies and TV shows on disc, or we have

(28:07):
recently gone back to physical media after we got tired
of streaming services dropping the films and TV shows we
love from their respective libraries. I fall in that camp.
For a while, I was digital only, but eventually I
did get fed up with constantly having to play leapfrog
to figure out which service has the movie that
I wanted to watch on it. Now forget it. I'll

(28:29):
just buy a copy of the movie so that way
I always have it if I want to watch it. Well,
consoles have obviously not served just as game centers, they've
also served as physical media players, so ditching the drive
is tough on those of us who want both. Now.
I've heard that sales of PS five external disc drives
have spiked in the wake of this announcement, and also

(28:50):
a lot of analysts have interpreted Sony's move to indicate
that future consoles will likewise leave off the disc drive,
it just won't be part of the system. Analysts are
also cheesed off that the seven hundred dollars price tag
is pretty hefty, considering you do not get a disc
drive in that model. Now, you could argue, yes, the
microchips are more advanced, they're more powerful, but it's still

(29:14):
hard to feel like you're not paying more for diminishing returns,
particularly if you're someone who can't really see the difference
in the various graphic settings to begin with, like me,
I have trouble seeing much of a difference between the
highest settings and the ones that allow you to play
with little impact to gameplay. Now, I don't doubt that
there is a difference. I'm sure there is, but my

(29:35):
television isn't large enough and I don't sit close enough
to it to be able to pick out those differences.
So I suppose one argument supporting the PS five pro
is that in the future there will be PS five
titles that will require that horsepower to run well. But then,
assuming that there will be future PS generations, we're likely

(29:56):
at the halfway point for the current console's life cycle,
so there's a lot to balance out when making a
decision as to whether you're going to buy one or not. Okay,
I've got a few more news stories to get through.
Let's take another quick break to thank our sponsors. So

(30:19):
Microsoft has held another round of layoffs for its Xbox division. Reportedly,
some six hundred and fifty employees are going to lose
their jobs as part of these layoffs. That sucks. Sorry
for anyone out there affected by that. That stinks. This
is according to reporting from Matthew Schomer of Game Rant.

(30:39):
He has an article titled Xbox has reportedly been told
to go dark today. So by go dark, what Schomer
means is that allegedly the Xbox division has been directed
to say nothing on social media, and this is an
effort to sidestep the reaction to this layoff decision. So,
according to Microsoft Gaming CEO Phil Spencer, the layoffs

(31:02):
are a continuation of the restructuring that has had to
happen in the wake of Microsoft acquiring Activision Blizzard. You
might recall that particular acquisition was a very lengthy process.
It took way longer than what was anticipated, and it
was not guaranteed to work out because there were various
regulators around the world who were raising concerns that the

(31:24):
acquisition would lead to a decline in competition in various
gaming markets, most notably in the cloud gaming market. You
might also remember that Microsoft has already held rounds of
layoffs that the company claimed to be connected to this
restructuring in the wake of the acquisition. When Microsoft did
this back in May, the company became the target for

(31:44):
a lot of online criticism. Tom Warren of The Verge
posted that Microsoft has directed employees to avoid posting on social
media in order to try and prevent a similar online
backlash situation this week. I'm not sure that's really going
to work out for them. The gaming industry as a
whole has been hit with a lot of layoffs in

(32:04):
the last year and a half, and it concerns me
as I know there are thousands of talented people who
are following their passion for video games and you know,
making a career out of that passion, and they have
subsequently found themselves out of work, which again stinks. I
really hope anyone affected by this lands on their feet

(32:25):
very quickly. Boeing continues to get hit by bad news.
Union workers who are machinists at Boeing have voted to
authorize a strike after more than ninety four percent of
union members rejected a proposed contract agreement, which would have
seen a twenty five percent pay raise over the course
of four years. Interestingly, the union leaders who were at

(32:47):
the negotiating table with Boeing had prompted members to agree
to this, but the union as a whole disagreed with
the team that negotiated this agreement and said, no, this
is not good enough. The strike effectively began this morning,
one minute after midnight, and thirty three thousand machinists are

(33:08):
represented in this union, which means all work on things
like Boeing aircraft has come to a halt. Now, a
twenty five percent raise ain't nothing right, So you might
be saying, what are the workers expecting, Well, they had
been asking for a much more aggressive raise schedule. They
wanted a forty percent raise over the course of

(33:32):
three years. So they wanted more money, and they wanted
it on a shorter timeline. They say that the
twenty five percent is not enough to compensate for how
employees have been made to make concessions regarding compensation and
pensions since two thousand and eight. So they say that,
you know, the previous sixteen years went with no raises

(33:54):
at all, and that a twenty five percent increase would
not put them on equal footing of where they would
be had they been and getting year over year raises
the way you would typically expect. So what they're saying
is this isn't good enough. It doesn't bring us to
where we should be, and it doesn't address the other
issues that we have. So we're going on strike. Pretty

(34:15):
rough situation. Last up, and then we're going to get
to some reading recommendations. Jared Isaacman and Sarah Gillis became
the first two private citizens to go on an EVA
or extra vehicular activity, in space. This is
also known as a space walk, and they did this
as part of Polaris Dawn, which is a SpaceX mission

(34:39):
that carried the two private citizens up to space along
with two other crew members, so four in total. The
pair each spent about eight minutes out there in space
in their space suits. They were not fully outside the capsule,
so they weren't like walking around or drifting around the capsule.
Their legs were still inside the capsule, so their upper

(35:00):
half was kind of poking out. They did try different
experiments to use tools and test their spacesuits' maneuverability and
how useful it would be in the instance of actually
doing a spacewalk where you're trying to perform some sort
of engineering task. Everyone obviously had to wear space suits
because there's no airlock in this SpaceX Dragon capsule. The

(35:23):
whole cabin had to be depressurized to allow for this exercise.
But the exercise was a success, and it's a huge
achievement for all the people at SpaceX who have been
working for years to get to this point. I know,
I get really critical of Elon Musk and his various antics,
as well as the companies he oversees, but there is
no denying that the folks at SpaceX have hit some

(35:45):
pretty impressive milestones. They were incredibly ambitious, and they were achieved. Hopefully,
one day people who aren't billionaires or SpaceX engineers will
get a chance to have a similar experience. Right now, well,
it remains well out of range of let's say, typical people.
I almost said ordinary, but that's making a judgment against

(36:09):
SpaceX engineers. I don't feel bad judging billionaires. They can
afford it. I'll judge billionaires all day long and they'll
be just fine. Okay, now we're at recommended reading time.
I've actually got four articles I want to mention. There
was so much going on this week. Well, three of
the articles I have to mention are all from, here's
no surprise, Ars Technica. Again, I have no connection to

(36:31):
Ars Technica. I'm just a fan. So first up, we've
got Kevin Purdy's Ars Technica article. It's titled Music industry's
nineteen-nineties hard drives, like all HDDs, are dying. So
this piece details how a data storage company has discovered
that around twenty percent of the hard disk drives that
were sent to them by media companies are ultimately unreadable.

(36:54):
And that really makes it clear that porting data over
to other storage methods needs to be a priority for
anyone who's still relying on legacy hard disk drives from
decades earlier. As the equipment fails, it becomes harder and
sometimes impossible to retrieve the data that's been stored on them,
and so we run the risk of irretrievably losing some

(37:14):
of the information that could include things like master tracks
for some of the most popular songs of the past.
This is actually reminding me that I should probably get
some cloud storage solutions for some media files I currently
have stored on an external HDD, but that's a me problem.
Next up, there's a piece by Rebecca Valentine of IGN
about how the entire gaming staff of a video game

(37:37):
development studio has recently resigned. That studio is Annapurna,
and the reasons behind the mass resignation are pretty interesting.
So the article is titled Annapurna's entire gaming team
has resigned, so go check that out. Eric Berger back
at Ars Technica has an article titled The future of

(37:57):
Boeing's crewed, as in c-r-e-w-e-d, spaceflight program is muddy after
Starliner's return, so it follows up on the tale of
the beleaguered Starliner spacecraft, which obviously experienced malfunctions
as it was nearing the International Space Station. It ultimately
returned back to Earth safely, but without its human crew

(38:19):
aboard it. They remain on the ISS for the time being,
so check that out. It's kind of bringing into question
where does Boeing go from here? How does NASA handle this?
Will the two organizations be able to move forward or
is it really in limbo? Now, finally, there's Jennifer Ouellette's

(38:41):
article in Ars Technica. It's titled Meet the Winners of
the twenty twenty four Ig Nobel Prizes. Now, I did
a Tech Stuff episode about the Ig Nobel Prizes a
while back. If you're not familiar with the Ig Nobels, these
prizes celebrate weird and unexpected achievements in various fields, usually
in science and technology, but also other areas as well,

(39:04):
and the general philosophy of the prizes is that they
go to projects that first make you laugh, then they
make you think. So check that out as well. Maybe
I'll do a follow-up to my Ig Nobels
episode to just talk about some of the stuff that won.
Often I feel like it's better for me to wait
and do those in roundups of multiple years because often

(39:26):
a lot of those projects are only tangentially related to tech,
and while they are funny and interesting, they don't necessarily
meet the rubric of Tech Stuff. I've been listening to
a lot of The Besties podcast. Again, I have
no connection to the Besties, but it's a show about
video games and stuff, and they use the word rubric

(39:46):
a lot, especially in their Patreon episodes, so it's kind
of gotten stuck in my vocabulary recently, just from osmosis.
I guess that's it for today's episode about tech news
for the week ending September thirteenth, twenty twenty four.
Happy Friday the thirteenth. Everybody be safe out there. I
hope you're all well, and I'll talk to you again

(40:08):
really soon. Tech Stuff is an iHeartRadio production. For more
podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or
wherever you listen to your favorite shows.
