
September 7, 2023 36 mins

How were hackers able to access important email systems belonging to big targets like the US Department of State? It turns out it was a perfect set of circumstances that Microsoft failed to address. Plus, we have a bunch of AI news, the FAA okays delivery drones flying beyond line of sight, and lots more! 

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production from iHeartRadio. Hey there,
and welcome to TechStuff. I'm your host, Jonathan Strickland.
I'm an executive producer with iHeartRadio. And how the tech
are you. It's time for the tech news for let's
see Thursday. That's it September seventh, twenty twenty three. Sorry,

(00:29):
holiday weeks always mess me up, but let's get to
the news. First up, we have an update on the
European Union's efforts to push back against big tech companies
via the Digital Markets Act or DMA. The European Commission
has designated six tech companies as quote unquote gatekeepers of

(00:49):
the digital market in the EU. Now. To qualify as
a gatekeeper, a company has to have more than forty
five million end users per month, more than ten thousand
business users per year, to be active in at least
three member states of the EU, and to have either
a market cap of at least seventy five billion euro

(01:10):
or a turnover of seven point five billion euro. The
six companies that have the honor of being called gatekeepers
include Alphabet that's Google's parent company, Amazon, Apple, Meta, Microsoft,
and ByteDance, which is TikTok's parent company, just in
case you've forgotten. Now, I've talked about this a little

(01:31):
bit this year about how Amazon has tried to fight
this designation as gatekeeper. Amazon's argued that it doesn't really qualify
under those terms, but it appears that those arguments have
fallen on deaf ears. The EU has singled out some
specific platforms and services in these companies that will have
to comply with new rules under the DMA. Those platforms

(01:55):
include stuff like Apple's Safari browser or iOS, or Google's
Android or Chrome browser, Google Search, Google Ads, Meta's Facebook
and Instagram, and Messenger. Those are just examples, that's
not all of them. In fact, the EU identified twenty
two services across the Big six, including some for the

(02:17):
companies I mentioned that I didn't go into. The rules
are meant to give EU citizens more control over the
services they use. So, for example, one rule states that
a company like Apple has to make all the pre
installed apps on iOS devices uninstallable and replaceable with third
party applications. So in other words, if you get an iPhone,

(02:40):
then you should be able to remove all the pre
installed Apple apps and just replace them with third party
apps if you want to. Now, that is to put
it lightly antithetical to Apple's approach, and there are a
lot of other rules. That's just one example, and they
really do push back hard against some of the key
strategies that tech companies have been employing over the last decade.

(03:02):
So this is a big deal. The gatekeepers have six
months to comply with these rules. If they are found
in violation of the rules, the EU can fine the
company up to ten percent of its total worldwide turnover.
Remember we're talking like seven and a half billion at
least here, so ten percent of that. But if a
company continues to break the rules, that actually can go

(03:25):
up to a twenty percent fine, which is, you know,
twice as much if my math is correct. Also, the
Commission asserts that it has the authority to force companies
to sell off parts of their businesses if they're unable
to comply with the DMA. So I'll be
really interested to see how the gatekeepers respond to this.
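To put those caps in rough numbers, here's a quick back-of-the-envelope sketch in Python, using the seven point five billion euro turnover floor from the gatekeeper criteria as a stand-in; actual fines would be based on each company's real worldwide turnover, which for these gatekeepers is far higher.

```python
# Back-of-the-envelope DMA fine math. The EUR 7.5 billion figure is
# just the turnover floor from the gatekeeper criteria, used here as
# a stand-in; real fines are capped by actual worldwide turnover.
turnover_floor = 7.5e9  # euros

first_violation_cap = 0.10 * turnover_floor   # up to 10% for a violation
repeat_violation_cap = 0.20 * turnover_floor  # up to 20% for repeat offenses

print(f"First violation cap:  {first_violation_cap:,.0f} euros")
print(f"Repeat violation cap: {repeat_violation_cap:,.0f} euros")
```

So even at the qualifying floor, a repeat offender could be looking at a one and a half billion euro fine.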

(03:45):
My guess is unfavorably. Microsoft has released information about an
infiltration attack that ultimately had a Chinese backed hacker group
called Storm-0558 gaining access to
email systems belonging to more than two dozen high value
targets we're talking about like big companies and also big

(04:07):
government agencies. And it sounds like the way this attack
happened was a convergence of a whole bunch of stuff
that wasn't supposed to happen, and it was like a
perfect storm situation for the hackers. So first up is
an expired Microsoft account consumer signing key, kind of like a
key to get into a system. I mean, that's kind

(04:29):
of what this is. And the key could, but should
not have been able to, create tokens
that would allow access to stuff like Microsoft's Azure service.
Azure is Microsoft's cloud computing platform. But the beginning of
the story actually dates all the way back to April
twenty twenty one, So see, the system that this key

(04:52):
was on was meant to be a very heavily protected system.
The only person allowed to access this workstation was a
specific engineer who had been thoroughly vetted by Microsoft because
they were working in a production development environment that other
people were absolutely not supposed to be able to access.

(05:13):
So that also meant the workstation was not allowed to
have several basic applications and services on it because those
are frequently attack vectors for hackers. So there was no
email on this thing, no web access on it, collaboration
tools couldn't be on it, that kind of thing. Essentially,
the workstation was approaching air gap status, although it still

(05:36):
had network connectivity, so it wasn't truly air gapped. If
it had been, then we'd have a different conversation going
on here. It also had multi factor authentication protection, and
all of this makes you wonder, well, then, how the
heck did the hackers get access to this If it
was such a heavily protected workstation, how did they manage
to get this key that had already expired, and how

(05:59):
then did they use it to get access to all these
different organizations' email systems? Well again, back in twenty twenty one,
in April, this particular workstation crashed and so Windows performed
a crash dump process. So this is when the computer
takes the data that's in the computer's memory and then

(06:19):
saves that data to long term storage. And the reason
for that is that someone can later look at the
data and kind of see what actually happened, what caused
the crash. Is there something that needs to be addressed
to prevent it from happening in the future. Well, it
turns out that this expired account consumer signing key was
in the computer's memory at the time of the crash,

(06:42):
so it was part of this crash dump. Now, it
was supposed to not make it through the next step
because Microsoft scans data to look for things that could
potentially be a security vulnerability and to remove them before
transporting the data to, you know, the debugging group,

(07:05):
and that step failed. It did not detect the fact
that there was this key that was just sitting there.
It's like a shiny key hidden in a big pile
of dirty, old data, and so they didn't think that
there was anything problematic there, so they moved it on
over to the debugging environment. And sometime this year, hackers

(07:29):
were able to compromise a different Microsoft engineer's corporate account
and that gave them access to this debugging environment, which
normally wouldn't have anything in it that would be particularly dangerous,
but they were able to find that key, that gleaming
shining signing key in that crash dump data. And then

(07:51):
how could they then use an expired key, an expired
consumer key at that it wasn't even an enterprise key.
How could they use that to forge tokens that would
allow them to access enterprise accounts on Azure. Well, Microsoft
revealed that in twenty eighteen, the company created a new
framework that messed things up. This new framework couldn't actually

(08:13):
validate signing keys properly. It couldn't really tell the difference
between a consumer key and an enterprise key, and unless
system administrators had taken some pretty extraordinary steps, assuming that
Microsoft had not addressed this and automated the validation process,
which is a weird assumption to make. Like, you would
assume that all of this would be taken care of

(08:35):
on Microsoft side, so there was no reason to take
these extraordinary steps. Well, unless you had taken those steps,
it meant that your system was potentially vulnerable to this attack.
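To make that validation gap concrete, here's a toy Python sketch. This is not Microsoft's actual code or token format; the real system uses asymmetric signing keys and Azure AD tokens, and the key names and HMAC scheme below are invented for illustration. It just shows the bug class: a validator that checks whether a known key signed the token, but not whether that key is expired or scoped for the resource.

```python
import hashlib
import hmac
import json

# Invented toy keys for illustration: each has a secret, a scope,
# and an expiry flag. (Not Microsoft's real key material or format.)
KEYS = {
    "consumer-key-2016": {"secret": b"consumer-secret", "scope": "consumer", "expired": True},
    "enterprise-key-2023": {"secret": b"enterprise-secret", "scope": "enterprise", "expired": False},
}

def sign_token(key_id, claims):
    # Sign a set of claims with the named key (toy HMAC scheme).
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(KEYS[key_id]["secret"], payload, hashlib.sha256).hexdigest()
    return {"key_id": key_id, "claims": claims, "sig": sig}

def validate_flawed(token):
    # The failure mode described in the episode: accept any token
    # signed by *some* known key, with no scope or expiry check.
    key = KEYS.get(token["key_id"])
    if key is None:
        return False
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(key["secret"], payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"])

def validate_strict(token, required_scope="enterprise"):
    # What the check should also do: reject expired keys and keys
    # whose scope doesn't match the resource being accessed.
    key = KEYS.get(token["key_id"])
    if key is None or key["expired"] or key["scope"] != required_scope:
        return False
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(key["secret"], payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"])

# A token forged with the expired consumer key still passes the
# flawed check, but fails once scope and expiry are enforced.
forged = sign_token("consumer-key-2016", {"sub": "attacker", "aud": "enterprise-mail"})
print(validate_flawed(forged))   # True
print(validate_strict(forged))   # False
```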
And that's in fact what happened. That was the perfect storm,
and a lot of it ends up being the sole
responsibility of Microsoft itself. It actually gets way more technical

(08:57):
than the overview I just gave. I gave a very
very oversimplified version of what happened, But if you would
like to read up on the details, I recommend checking
out an article in Ars Technica. It's titled Microsoft finally
explains cause of Azure breach: An engineer's account was hacked.
It goes into more detail and explains on a technical

(09:20):
level what was happening. So if you want to check
that out, I recommend it. Okay, we have now come
up to the section of tech news I like to
call AI because it's all about AI. And so leading
the charge is G/O Media, that's the parent company of Gizmodo. So,
according to The Verge, on August twenty ninth, G/O Media fired

(09:43):
the entire staff of Gizmodo en Español, which, as I'm
sure you've gathered, is the Spanish language version of Gizmodo.
So the company replaced the staff with AI translators so
they would translate the English language articles into Spanish. Now,
articles on the Spanish language site include a bit at

(10:04):
the end of the article at the bottom of the
page that says something like contents have been automatically translated
from the original. Due to the nuances of machine translation,
there can be slight differences. Well, readers have reported some
slight differences and some major ones too. They say that
this tool is far from perfect. They say there are
examples of articles that start in Spanish but then at

(10:27):
some point through the article, they change to English and
stay that way. As many have pointed out in recent months,
the shift toward using AI for content creation or even
auto translation isn't necessarily as cost saving or labor saving
as you might first imagine, because you have issues with
factual errors, technical hiccups, the AI hallucinating and inventing information

(10:53):
that is incorrect, and that means that humans still have
to pore over AI's work to make sure that the
work is right. And at some point you have to
ask yourself the question, isn't it just easier and smarter
to just use humans rather than to have to deal
with all the instances of robot goof-ups? But then,
what do I know? Just a dumb human. Earlier this year,

(11:15):
a music creator with the handle Ghostwriter made headlines
when they released a song they had written called Heart
on My Sleeve. This was the song that was voiced
by an AI version of real human artists Drake and
The Weeknd, and that prompted a big old brouhaha
in the music industry. Music companies argued that it was

(11:36):
a copyright violation. That doesn't track for me at all
because Ghostwriter apparently wrote the actual song. So Ghostwriter has
the copyright for the song, at least the lyrics and
presumably the music, and you can't copyright a person's voice.
So I don't know where you could actually argue, you know,

(11:59):
in a valid way, that this is a copyright violation.
But the issue raised a lot of questions and it
pointed out that there are some big old gaps that
we have in IP law as far as AI generated
content goes. And now The Verge reports that, one, Ghostwriter
has released a new song called Whiplash that features AI
impressions of Travis Scott and twenty one Savage, and, two,

(12:24):
Ghostwriter has posted a message stating that they're going to
release a record of music with impersonated voices if the
artists who were impersonated sign off on it. So it's
not that Ghostwriter is saying this is going out. You know,
I'm gonna sell this whether you like it or not.
Ghostwriter is saying, if you consent, then we'll start selling this

(12:44):
and I'll direct royalties to you. So you're gonna make
money and you never had to do anything. Now, according
to Harvey Mason Jr., head of the Recording Academy, the
other thing Ghostwriter wants to do is a distinct possibility,
which is Ghostwriter wants to submit Heart on My Sleeve
for Grammy consideration. The Recording Academy is the organization that

(13:07):
essentially runs the Grammys, and so the head of the
Academy says, well, from a creative standpoint, he totally or
they totally can submit this song for consideration because it
was written by a human. Might not have been performed
by one, but it was written by one. That's all
that matters. However, there are other metrics that the song

(13:28):
has to meet in order to qualify for consideration. These
are metrics that involve things like distribution, and because so
many platforms pulled the song after being pressured by the
music labels, chances are this song does not meet those qualifications.
So I don't think it's going to be considered for

(13:48):
a Grammy, not because of the creative side, but because
of the business side. But according to the head of
the Recording Academy, it totally could be Grammy eligible. Interesting. Okay,
We've got a lot more stories to cover before we
wrap up today, but first let's take a quick break
to thank our sponsors. NBC News reports that platforms like

(14:21):
Instagram and TikTok are being inundated with sexualized AI generated content. Essentially,
we're talking about chatbots and AI generated images. Here. The
news agency found thirty five app developers running more than
one thousand ads for these kinds of services on Meta's
various platforms, and that's notable for a few reasons. A

(14:44):
big one is that Meta has really cracked down hard
on sexualized content from human beings, so people like sex
workers have been pushed off the platform. Companies that make
sexual aids and sexual toys have been denied the
ability to run ads on the platform. And yet here
we have this influx of AI powered services, adult oriented

(15:09):
services that aren't just appearing on the platform. They're running
ads. Like, this is a paid service,
they're working with Meta. And yet despite the fact that
Meta has cracked down on this in these other situations,
they don't seem to have done it here. The same thing,
or a very similar thing is happening over at TikTok.

(15:31):
NBC only found fourteen app developers running ads over there, however,
and there's a lot going on here. Obviously, there's this
double standard that disenfranchises you know, human beings who are
working in this space, but doesn't do that to AI.
And then there's the concern about security and privacy. This
is my big concern because we already know that AI

(15:54):
can be real loosey goosey with your private information, right Like,
if the AI model is taking the stuff you communicate
to the model into account, then you are feeding that
AI model, which then can regurgitate the stuff you fed
to it to other people. So maybe you don't want
to express your most intimate desires and preferences to an

(16:17):
AI chatbot. It just might not turn out so well
for you in the long run. Anyway, NBC News rightfully
raises the question as to why these AI powered ads
seem to be getting a pass when it would be
against the rules for a human to post something similar.
I don't know, maybe the robots already got to them.
Starting in November, Google will require political ads that rely

(16:40):
on AI for content generation to prominently disclose the involvement
of the AI. Specifically, Google will require ads that contain
quote synthetic content end quote that appears to show realistic
people or events to be labeled as such. So let's say,

(17:00):
for example, that you had a political ad and it
shows a certain politician appearing at a certain event, and
that event is absolutely overflowing with a huge, enthusiastic, supportive audience.
But perhaps in reality they did appear at an event,
but maybe it was poorly attended. So AI has been
used to generate this crowd. Well, the ad would need

(17:22):
to disclose that it had used AI to augment these images.
I don't think it would have to get as granular
as to say what actually was done to the images,
but it would have to alert you that AI was
used as part of that ad generation. Now there is
a threshold here. If someone were just using AI to

(17:42):
do some minor tweaks like removing red eye or something
like that, they don't have to disclose that. That doesn't
meet the criteria. But if you're doing something like making
someone appear somewhere they weren't or with
someone who was not there, anything like that, that would
have to be disclosed. All right, we are now done

(18:04):
with AI for this episode, but I'm not done with Google.
YouTube is removing some control options for content creators when
it comes to ads. So right now a content creator
has a decent amount of freedom of where they will
allow ads to run against monetized content. So, for example,

(18:24):
a lot of ASMR artists will allow pre roll ads
these are the ads that play before a video plays,
but they turn off mid roll ads, which of course
play in the middle of a video. Like the video stops,
you get an ad and then the video keeps going,
kind of like the ads in this podcast. Also, they'll
turn off post roll ads. Those are the ads that

(18:45):
play after a video has finished playing. And the reason
why ASMR artists turn off mid-roll and post-roll
ads as a rule is because they can be really
jarring to listen to if you are using ASMR to
try and relax or to lull yourself to sleep, because
chances are having some obnoxious ad playing right after the

(19:06):
video ends will kind of spoil the effect. Anyway, YouTube's
changes mean that creators can select whether or not ads
can play on either side of a video, but they
don't get to choose whether those ads will be pre
roll or post roll or both. They can say yes,
I will allow ads to play before or after, but
they don't get to say which one. Just it's before

(19:29):
or after collectively, and YouTube gets to decide whether the
ads will play before the video or after the video
or both. They also are not going to be able
to choose whether or not the ads are skippable or
non skippable. There are also going to be some other
changes to things like mid-roll ads as well, but they're
not quite to the extreme that I just mentioned. And

(19:51):
for a lot of content creators this may not be
that disruptive, but for folks in the meditation or ASMR
or relaxation spaces, it's causing a lot of anxiety. Heads
up to Chrome users, Google has been rolling out an
enhanced ad privacy feature. Some might say this feature is
perhaps a bit misleading because the name would have you think, Ah,

(20:15):
this feature will help keep my data private from advertisers.
Now it kind of does that, but it kind of doesn't.
So this ties into an application programming interface that's called Topics,
the Topics API, and what's going on here is that
Chrome uses your browser history to serve you targeted ads.

(20:37):
So based upon the kinds of sites you visit and
how often you visit them and how long you stay there,
Chrome will be able to serve ads to you that
it judges are more relevant to your interests. So Topics
is supposed to help replace third party cookies, something that
Chrome will stop supporting in the not too distant future.
Cookies can act as trackers for your behavior across the web.
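As a rough illustration of that trade-off, here's a hypothetical Python sketch; the site-to-topic table is made up, and Chrome's actual Topics API uses its own taxonomy with on-device classification. The point is just that the advertiser sees coarse topics, never the raw history.

```python
from collections import Counter

# Hypothetical lookup table mapping sites to coarse interest topics.
# Illustrative only: Chrome's real Topics API classifies sites with
# its own taxonomy, on-device, rather than a table like this.
SITE_TOPICS = {
    "ifixit.com": "DIY & Repair",
    "arstechnica.com": "Technology",
    "theverge.com": "Technology",
    "sweetwater.com": "Music Gear",
}

def topics_for_history(visits, top_n=3):
    # Return only coarse topics, never the visited URLs themselves,
    # which is the trade Topics makes versus third-party cookies.
    counts = Counter(SITE_TOPICS[site] for site in visits if site in SITE_TOPICS)
    return [topic for topic, _ in counts.most_common(top_n)]

history = ["ifixit.com"] * 5 + ["arstechnica.com", "theverge.com"]
print(topics_for_history(history))  # ['DIY & Repair', 'Technology']
```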

(21:01):
So the benefit of topics, according to Google, is that
it doesn't hand your browser history over to advertisers. It
doesn't explicitly say, oh, you went to site X, site y,
and site z. Instead, what Google does is hold on
to that information and it just indicates the types of
stuff you're interested in. So for me, it might tell

(21:24):
an advertiser something about me without going into details. So
for example, it would not say, oh, yeah, Jonathan, he's
on the iFixit page like five times a week.
It wouldn't say that. Instead, it might say Jonathan is
interested in technology, gadgets, DIY repair, et cetera. And you
can argue that's an improvement, right, Like, it's not as

(21:44):
explicit as listing out all the different specific sites you
went to. But the way Google has rolled this out
has left some people upset because users are seeing a
pop up that alerts them to the enhanced privacy from ads.
But what this really means is that it's already opting
you into this browser history recording feature, meaning

(22:08):
that you're enabling this, you're enabling Google to use your
browser history as a way to target ads to you. Like,
that's what happens. If you just hit got it,
it opts you in. And people are saying, oh,
it makes it sound like by the name of the
feature that if you hit got it it opts you out.
The opposite is true. It opts you in. And in

(22:29):
order to opt out, you then have to go to
your browser settings, go to the privacy settings. There's a
selection called ad privacy. If you do that, then you
can go in and turn off this feature, but you
have to do it manually at that point because you've
already said got it. Well they got me because I
said got it. I probably didn't read it properly. But

(22:51):
even if I had, like, there's a chance I would
have just thought, oh, by clicking got it, it means
I've opted to be out of this program, when in
fact the opposite was true. And when I went into
the settings, that's what I saw. The settings were all
turned on and I needed to turn them off manually.
So just want to let you folks know that, so
that if you use Chrome, you can go into those

(23:12):
privacy settings check that ad privacy. See if it's on,
and you know, if you have no problem with that,
that's fine, just leave it on. But if, like me,
you thought you were opting out and it turns out
you were opting in, you might want to make some changes. Okay,
I've got some more news stories to come, but before
we jump into all of that, let's take another quick break. Okay,

(23:42):
quick story here. The FAA is extending how far UPS
is allowed to fly delivery drones. So this week the
FAA updated its rules and will allow deliveries that go
beyond line of sight. So previously, an operator or a
spotter needed to be able to maintain sight

(24:04):
on a delivery drone as it was dropping off a package, which,
of course, you could argue eliminates the benefits of having
a delivery drone in the first place. So now, for
the first time, companies will not have to employ spotters
to keep an eye on a drone as it makes
its way to the drop off. So the days of
robots delivering our small packages are closer than ever. Next up,

(24:25):
have you ever had the problem of running out of
disk space? Maybe it was on a video game console
and it told you that no, you cannot download Starfield
because your console's disk drive is already full, so you're
gonna have to uninstall some stuff to make room, or
maybe you've encountered it on a work or home PC. Well,
if you have, just be comforted in knowing that you

(24:47):
are not alone. Because last week, a disk capacity issue
shut down Toyota, like all of Toyota's assembly plants in
Japan had to shut down, and some of the company's
servers just detected that there was insufficient disk space to
continue operations, and so operations stopped. This raised concerns

(25:10):
that perhaps the auto company had been the target of hackers,
but in fact it turned out to be a pretty
simple problem with a pretty simple solution. Toyota transferred the
data to a server that had much more storage capacity
and then things got back on track. But it really
does show that the little frustrations that can irritate us
as end users can cause much larger problems in other contexts.

(25:32):
The Mozilla Foundation recently conducted a study on the privacy
practices of a certain segment of technology, and they found that
this particular sector of technology underperforms across the board when
it comes to privacy, and that segment is drum roll, please,
the automobile industry. So, according to the Mozilla Foundation, ninety

(25:55):
two percent of auto manufacturers give drivers very little or
no control over how their private information is gathered and used.
More than eighty percent of them will share driver data
with outside partners. Out of the twenty five car companies
that the Foundation studied, not a single one of them
even met the minimum privacy standards that they had established

(26:20):
they being the Foundation, I should say. Plus, the Foundation said,
these companies aren't just collecting and sharing data, they're collecting
more information than they need in order to provide services
to drivers. So this reminds me of how platforms like
Facebook used to let developers get access to all sorts
of data, even if their app didn't need that data

(26:41):
to operate. That's part of what led to the whole
Cambridge Analytica scandal in fact, and it turns out a
very similar thing is playing out in our vehicles to
an extent. Anyway. Some other stuff that the study discovered
includes the fun fact that more than half of auto
manufacturers are willing to share driver information to law enforcement
upon request. If you contrast that with Internet based platforms,

(27:05):
they are known to exert a pretty decent deal of
effort to try and keep user data secure, because otherwise
you lose the trust of your user base. But the
car companies don't seem to have that same perspective. Seventy
six percent of those auto companies claimed that they actually
have the right to sell driver information to data brokers,

(27:28):
like they just have that right inherently, it doesn't require you
to consent to it. If you're wondering which company scored
the lowest on the test results, that would be Tesla.
And I know I have a tendency to criticize Tesla
a lot on the show, but it's stuff like this,
I would argue that kind of justifies that approach. As

(27:48):
for what you, as a driver can do about this,
the answer is not a whole lot. Mozilla Foundation has
called out to car companies to make substantive changes to
how they collect and utilize data because this problem is
too big for individual drivers to tackle in any meaningful way.
Hey do you remember Clubhouse. That's the app that debuted

(28:09):
in twenty twenty one and made a real big splash. It
let people create virtual spaces where they could broadcast audio
to an audience. So it's kind of like you could
have a live streaming podcaster, or you could be even
a DJ if you wanted to be. You could have
shows with guests and allow folks to listen in. And
initially Clubhouse had this air of exclusivity. Originally the app

(28:34):
was invitation only and it was also limited to the
iOS platform at first, and there's nothing like an exclusive
club to make people want to become a member. But
the shine wore off of Clubhouse, and while the app
did have sort of a meteoric rise, it faded from
conversations not too long after that. Now the company has

(28:54):
laid off about half its staff and it's looking to
reframe Clubhouse as more of a message app than a
live audio app. So rather than being a bunch of
virtual town halls that could be hosted by anybody you know,
from a no one like me to an actual celebrity,
the focus now is for Clubhouse users to form groups

(29:15):
with people they actually know, like real life friends and family.
In fact, it sounds a lot like what Twitter was
intended to be when it was first launched. It wasn't
thought of as like broadcasting to everyone. The use case
was more like you would follow people you actually know,
so that you could keep up with what they were doing.
Now users will be leaving audio messages for each other.

(29:38):
I'm not really sure how this is at all different
from other messaging apps that also incorporate audio elements, and
I'm not sure if Clubhouse can actually leverage this approach
to regain relevancy in the market. But then I never
actually joined Clubhouse because I'm not one of the cool kids,
So I'm out of the loop here. I don't know
what I'm talking about. I guess now. I was going
to talk about a secret rocket launch at Cape Canaveral

(30:02):
this week, one that isn't part of SpaceX, it's not
part of NASA, it's not part of any other like
space agency. But it turns out this secret launch was
scrubbed and didn't happen. Technically, it was a secret for
at least a while, but folks figured out pretty quickly
that it was a US military operation and that it

(30:23):
was most likely meant to conduct a test launch of
hypersonic missiles. So as that term suggests, these are missiles
that travel faster than the speed of sound, and with
this incredible speed and their maneuverability, they would be more
capable of avoiding anti missile weaponry. But today, or rather yesterday,

(30:44):
I should say, the military scrubbed those plans. They also
acknowledged that this was meant to test hypersonic missile technology,
but the reason for scrubbing was very vague. It just
said that during pre flight checks they had to make
the call to scrub launch. This would have been the
first surface test of a US hypersonic missile, but other

(31:07):
countries like China, which actually leads the world in this technology,
and Russia have already deployed, and in Russia's case even used,
hypersonic capable weaponry. We haven't heard the last of this,
I don't think or I mean maybe we won't hear it.
We'll see it. I mean we'll hear it, because hypersonic
you'll get a sonic boom afterward, and then an actual

(31:27):
boom if it's a missile. I'm getting off track now.
If you were going to remake Alanis Morissette's song Ironic today,
you might include this last news item there. So, Rockstar Games,
the company behind popular franchises like Grand Theft Auto, has
included some of the same technology the company has previously

(31:47):
campaigned against in some of its games. That's now selling
on Steam. So this all has to do with Digital
Rights Management or DRM. So the purpose of DRM is
to prevent or discourage piracy, though advocates often argue that
pirates will find ways around DRM, so then the only
people who have to deal with DRM are actual valid customers,

(32:11):
which means you're just making the experience worse for people
who were already paying for the experience, and the people
who didn't want to pay for the experience, they just
found ways to get around the protections you had put
in place. And anyway, a hacker group called Razor 1911,
years and years and years ago created a bunch of

(32:32):
cracks for Rockstar Games, for certain titles in Rockstar Games.
So these were pieces of software that are meant to
get around DRM. Really just files that allow you to
bypass DRM. And the interesting thing is that now Rockstar
has put some of those old titles for sale on Steam.
But Rockstar needed a way to get around this DRM

(32:55):
that they themselves had put on these old games, and
to do that, what the company apparently has done is
to include some of the very same cracks made by
those hackers. A hacker group that Rockstar Games was very
gung ho on going up against back in the day,
and now they're using the hacker's own tools because these

(33:16):
old games have DRM on them that otherwise would make
the playing experience suboptimal. Bleeping Computer uncovered this and has
screenshots of code that indicate that, yeah, some of those
cracks do appear in certain Rockstar game files, which is
pretty wild stuff. Okay, I've got a couple of recommendations
of articles for you all this week. First up is

(33:38):
an article in Wired. It's titled The Burning Man Fiasco
Is the Ultimate Tech Culture Clash. So obviously this goes
into detail about how thousands of people found themselves stranded
in the desert at the Burning Man Festival while torrential
rains move through the area. But it's also about how
a cultural subset essentially appropriated and, you know, took over.

(34:01):
They hijacked an art festival, and how that in turn
has changed the festival itself, so well worth a read.
The second recommendation I have is titled Airbnb bookings dry
up in New York as the new short stay rules
are introduced. I think this is a really interesting read.
It's in the Guardian and it's only partly tech related.

(34:23):
I would argue a lot of people put Airbnb in
tech company status, so it kind of fits in that regard.
But it's also really about how the tech startup culture
that's centered around disruption can have some
really negative consequences. So in this case, the disruption was

(34:45):
meant to be to the hospitality industry, right, that's what
Airbnb is taking aim for. They're taking aim at like
hotels and cabin rentals and things like that in an
effort to you know, democratize it to some extent and
also just to make a ton of cash in
the process. But New York, the response from New York

(35:06):
was that we need to make up rules to curtail
this because people who own properties, rather than selling them,
they are renting them out for these short stays. And meanwhile,
we have a housing crisis in the city. There's not
enough housing for the people who need it. And part
of the reason is because there are all these landlords

(35:27):
who are renting out these spaces in short term. So
we're going to make it really really hard for them
to do that. The rules are making it very challenging
to run an Airbnb kind of business in New York
City in order to address this issue. So I just
thought it was interesting to see the interplay between disruption

(35:48):
and legislative response, so I recommend that one as well. Okay,
that's it for the tech News for Thursday, September seventh,
twenty twenty three. I hope you're all well, and I'll
talk to you again really soon. TechStuff is an

(36:10):
iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app,
Apple Podcasts, or wherever you listen to your favorite shows.
