
September 20, 2024 37 mins

Israel is allegedly behind an attack on Hezbollah involving exploding pagers. How did that happen? In other news, Microsoft wants to reopen a nuclear power plant to supply electricity to power-hungry AI data centers. Plus much more! 

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production from iHeartRadio. Hey there,
and welcome to TechStuff. I'm your host, Jonathan Strickland.
I'm an executive producer with iHeart Podcasts. And how the
tech are you? It's time for the tech news for
the week ending September twentieth, twenty twenty-four. The US

(00:27):
Federal Trade Commission, or FTC, issued a release this week
titled, well, actually it's a really long title. Essentially, this
release says social media and streaming companies are essentially conducting
surveillance on both their users and on non-users, while
also maintaining inadequate data security and privacy controls. Which, I mean,

(00:51):
if you're someone who has paid even just a little
attention to these companies over the last several years, you
might respond to this news with, oh, yeah, yeah yeah. Also,
by the way, water is wet. At least that was
my initial reaction, because to me, the revelation that these
companies are, one, collecting massive amounts of data about people

(01:12):
and, two, they're pretty crappy about keeping that stuff safe,
that's not exactly news. But then, the FTC was conducting
an investigation to determine exactly the scope of data collection,
as well as what steps, if any, these companies
are taking to, you know, protect private information. These are

(01:32):
the sorts of things that you've got to do if,
maybe further down the line, you decide to, you know,
pressure companies into making changes, or pressure the government into
passing laws that will require companies to make changes, now
perhaps by fining the ever-loving socks off these
companies. Ah, a guy can dream. Anyway. The release detailed

(01:54):
how these companies collect information not just about their own users,
I mean that is evident, but also on people who
aren't users at all. And you might ask, you know,
how could that happen? How could someone who's not a
user have their data collected by these companies? Well, let's
give one simple example. Let's say you've got an Uncle Joey,
and Uncle Joey's not on Facebook, but you do post

(02:17):
about Uncle Joey a lot on Facebook, and you know,
Facebook now knows stuff about Uncle Joey. That's one way.
Another is that companies might get access to people's information
by dealing with data brokers. Data brokers are companies that
buy and sell personal information across the Internet, and a
data broker might have quite the dossier on you or

(02:38):
on Uncle Joey, and that information is gathered from multiple
sources and organized into handy-dandy packets for any company
that really wants to know, you know, what kind
of breakfast cereal you like, or what kind of car
you drive, or which political party you support, or whether
or not you like sports, that kind of thing.
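Just to make that concrete, here's a rough sketch in Python of what one of those broker packets might look like as a simple data structure. Every field and value here is invented for illustration; real brokers have their own formats.

    # A hypothetical data broker "packet" on one person. Every field
    # and value here is invented purely for illustration.
    uncle_joey_dossier = {
        "name": "Joey Example",
        "sources": ["relatives' social posts", "loyalty cards", "ad trackers"],
        "breakfast_cereal": "bran flakes",
        "car": "pickup truck",
        "political_party": "undeclared",
        "likes_sports": True,
    }

    # A buyer might query the packet for ad targeting.
    if uncle_joey_dossier["likes_sports"]:
        print("Show Joey the sports-drink ad.")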

(03:01):
The FTC found most of these companies lack sufficient data security
and retention policies, meaning that some of these companies retain
personal information indefinitely. So back in the corporate world, there
are often rules that mandate that companies destroy older documents
as time passes. I worked for a consulting company once
upon a time, and they had strict rules on how

(03:23):
long we could retain files on our customers, for example,
and once that time was up, we were obligated to
delete electronic files and shred hard copies. This was for
the protection of both the customer and the consulting
firm itself. But these social network companies and streaming sites,
they often don't have that kind of a policy, and

(03:45):
so the information they have on people can stay locked
away and grow over time.
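To illustrate the kind of retention rule I'm describing, here's a minimal sketch in Python of a scheduled cleanup job that deletes client files once a retention window expires. The folder path and the seven-year window are just assumptions for the example.

    # A minimal sketch of an automated retention rule: delete client
    # files older than a set window. The directory path and the
    # seven-year window are assumptions for the example.
    import os
    import time

    RETENTION_SECONDS = 7 * 365 * 24 * 60 * 60  # roughly seven years
    CLIENT_FILES_DIR = "/srv/client_files"  # hypothetical folder

    def purge_expired_files(directory: str) -> None:
        cutoff = time.time() - RETENTION_SECONDS
        for name in os.listdir(directory):
            path = os.path.join(directory, name)
            # getmtime gives the file's last-modified time in seconds
            if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
                os.remove(path)

    # A daily scheduled job (cron, for instance) would call:
    # purge_expired_files(CLIENT_FILES_DIR)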
The report urges Congress to pass laws that will give citizens more data security,
maybe even give citizens a little bit of agency when
it comes to how their own personal data can be
collected and used. Wouldn't that be nice? And further that

(04:06):
Congress should prioritize protecting younger Internet users in particular, who
often end up being exploited due to massive loopholes in systems.
We talked about this recently, about how companies like Google,
through YouTube, were allegedly working with advertisers to target teenagers,
even though that's expressly against the rules, because what they

(04:29):
could do is say, well, let's just target unknown accounts
where we don't have an age associated with the account holder,
but the account has behaviors that are typically associated with, say, teenagers, right?
So you're not targeting teenagers. You're targeting people who behave
like teenagers and you don't know how old they are,

(04:50):
So how could you be blamed for targeting teenagers? That's
the allegation that's going around regarding YouTube. So loopholes like
that provide huge opportunities for these companies to exploit
people who otherwise are meant to be protected from that
sort of thing.
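To show why a loophole like that works, here's a tiny Python sketch of the kind of selection logic being alleged. To be clear, none of this is YouTube's actual code; the accounts and behavior signals are invented for illustration.

    # Purely illustrative sketch of the alleged loophole: the rule says
    # never target known teens, so the filter picks unknown-age accounts
    # whose (invented) behavior signals look teen-like instead.
    accounts = [
        {"id": 1, "age": 16,   "teen_like_behavior": True},
        {"id": 2, "age": None, "teen_like_behavior": True},
        {"id": 3, "age": 45,   "teen_like_behavior": False},
    ]

    targets = [
        a for a in accounts
        if a["age"] is None and a["teen_like_behavior"]
    ]
    print(targets)  # only account 2: age unknown, but behaves like a teen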
So the FTC actually voted with unanimous support to issue this report. I think that's pretty refreshing

(05:12):
for an agency to say, yeah, we're all aligned on this,
we all think that this is important, and you don't
get that division you typically see with these, where out
of sheer principle, members will vote against a measure because
unity is something to be avoided at all costs. Enough
with that commentary. One huge story that unfolded over this

(05:35):
past week involved pagers or what we used to call
beepers when I was a kid, and I'm sure you've
heard the disturbing and grisly details. So this week, pagers
in Lebanon and Syria began to explode, killing
at least a dozen people and injuring thousands in
the process. The following day, another series of explosions rocked

(05:57):
the area, this time from walkie-talkie handsets blowing up.
The operation is believed to be a semi-targeted attack
on Hezbollah that was orchestrated by Israeli spies. Now, I
say semi-targeted because it's pretty darn hard to guarantee
a device like that is going to be on the
actual target that you have in mind, and it's really

(06:18):
impossible to guarantee that that target isn't also near innocent
civilians when the device actually does explode. In fact, reportedly
some of the casualties have been children, which is obviously
truly heartbreaking. It's a horrible thing. We do have
some more details courtesy of The New York Times. The article,
which is titled "How Israel Built a Modern-Day Trojan

(06:41):
Horse: Exploding Pagers," gives more details as to what actually happened.
So the pagers contained a small amount of explosives, and
that explosive would activate upon receiving a signal. That signal
was a message that was sent out at three
thirty p.m. local time. The New York Times article
reports that Israel has been planning this operation for a

(07:02):
long time. Hezbollah, which obviously has been in a violent
conflict with Israel for decades now, has more recently attempted
to migrate members to lower-tech communications solutions. That's in
an effort to remain undetected by Israeli agents. The argument
was that smartphones and things like that they were helping

(07:25):
Israeli military and spies target people in Hezbollah. So, in
order to avoid that, switch to lower-tech stuff that
doesn't have the same capacity for being tracked. That was
the concept. Israel, however, knew about this and apparently took
this opportunity to build such devices with explosives directly incorporated

(07:46):
into them, and then essentially fed these products to Hezbollah
agents through third parties. So Israel just made sure that
these other companies had these explosive devices, and then, you know,
when Hezbollah agents were looking for these low-tech
communications devices, the ones they got were the ones that

(08:07):
were built by Israel. The attack sounded like something that
would be in a far fetched action movie, you know,
something you might see in like one of the Kingsman
films or something. And I cannot imagine how horrifying and
terrifying it must have been to have been present when
those explosions began to happen. It's a really sobering, I

(08:28):
mean beyond sobering, thing to think about, and kind of
terrifying really, just that this was a coordinated effort that
was extremely successful in hurting a lot of people. Now,
whether those were the quote unquote right people, I don't know.
I'm generally against people hurting each other at all, so

(08:48):
I'm not a big fan of Hezbollah carrying out acts of
violence against people in Israel. That's horrible. Not really keen
on Israeli military and spies carrying out acts of violence
on people in Lebanon and Syria. That's horrible. It's all terrible.
That's really it, when you get down to it. Over in Europe,
Reuters reports that major tech companies are trying to convince

(09:09):
EU leaders to not go so heavy handed with regard
to upcoming laws and regulations relating to artificial intelligence. Now,
you might recall that OpenAI's Sam Altman started meeting
with various political leaders a couple of years ago. Now,
those were early discussions about proposed AI regulations, and

(09:31):
Altman reportedly was calling for rules. He was saying, yes,
we should have regulations for AI. However, critics accused him
of trying to lay a foundation that would really benefit
OpenAI while simultaneously suppressing potential competition from smaller companies,
and all the while, the actual rules that were created
would do very little to protect the citizens of the EU.

(09:56):
Even if the tech companies fail to persuade legislators to
use a lighter touch when it comes to creating AI policy,
they don't have too much to be worried about, at
least in the short term, because according to Reuters, the
rules that are being discussed now, once they are adopted,
they aren't actually legally binding. They would be, as Captain

(10:17):
Barbossa might say, more like guidelines. Also, a happy belated
Talk Like a Pirate Day. I'm a day late on
that one. Anyway, if a company is found to ignore
or violate guidelines, particularly multiple times, then it could potentially
face a more serious challenge from regulators in the future,
which ultimately could end up with like fines and that

(10:40):
kind of stuff. The issues at play here include everything
from how companies will gather data to train their AI,
like data scraping. In other words, there are concerns about copyright,
like, can AI companies indiscriminately train models on information that
otherwise has protection from such use? Transparency is a
really big part of it, because right now AI largely

(11:01):
follows the black box model of transparency, meaning there is
no transparency. Stuff goes in, other stuff comes out. We're
not allowed to see what process happens in between those
two things, that kind of thing. Reuters reveals that many
major companies, including Google and OpenAI, are lobbying to
be included in working groups that are meant to shape policies,

(11:22):
which again gets back to that issue I talked about
a moment ago. When the folks who are meant to
be regulated are allowed to shape the regulations, what do
you think is going to happen. Who do you think
that's going to ultimately benefit? Here's a hint, it's not
you or me. Okay, we've got more tech news to
get through, but first let's take a quick break. The

(11:50):
United Nations has issued a report that recommends the organization
of a committee that would be in charge of governing
and monitoring the development and deployment of artificial intelligence around
the world. The report suggests that this committee would have
a similar structure and status to what the Intergovernmental Panel on
Climate Change possesses. The recommendations aren't all about putting AI

(12:15):
in the spotlight as a potentially dangerous, perhaps catastrophically dangerous, technology.
They're also about deciding how to leverage AI
in ways that provide the most benefit while creating
the least potential for harm. How to ensure that poorer
countries are able to realize the benefits from AI while

(12:36):
also having a seat at the table when it comes
to governance, and thus not be left behind, so as not to
create an AI gap, in other words. As we have seen,
artificial intelligence is really complicated, right? It's not just good
or bad. There's a lot of complex stuff going on here,
and it's far too easy to fall into simplifying things

(12:58):
beyond reason. I've done that personally. I mean, just
in an effort to try and communicate stuff, I have
oversimplified the problem. I recommend Will Knight's article over on Wired.
It's titled "The United Nations Wants to Treat AI With
the Same Urgency as Climate Change." Knight does a great
job explaining not only the recent report that the United

(13:22):
Nations issued, but also the existing regulations that are already
in place that could potentially shape this intergovernmental body as
it takes form. So check that out. Emily Birnbaum and
Oma Seddiq of Bloomberg have a piece that's titled "Microsoft
Executive Warns of Election Meddling in Final 48 Hours."

(13:44):
So the referenced executive is Brad Smith, who told the
Senate Intelligence Committee here in the United States that foreign-backed
campaigns aimed at interfering with US elections are likely
going to peak two days before the election itself. There have been examples
in other countries in which deepfakes

(14:04):
of various candidates in those elections made the rounds just
before their election day. We've already seen some examples of
AI generated material relating to the election here in the
United States, though for the most part it hasn't actually
made a huge impact, although one example did prompt Taylor
Swift to make a statement about voting and also the

(14:25):
candidate she personally supports, partly because an AI-generated image
of her appearing to support Donald Trump got some buzz,
especially after Trump himself boosted that signal, and Swift's response
has inspired a few hundred thousand citizens here in the
US to register to vote. So that's actually a really
good thing, you know. Participation in democracy is actually

(14:45):
absolutely vital for democracy's success. But yeah, we're likely to
see a lot more of that leading up to election day,
according to Smith, and that sounds like it tracks to me.
We've already seen multiple stories about how countries like Iran
and Russia have attempted to shape the election to varying
degrees of success here, especially this past year, So for

(15:06):
that to escalate seems like it's a pretty safe bet.
Unsafe consequences, but a safe bet that it's gonna happen.
Simon Sharwood of The Register has an article titled "LinkedIn
started harvesting people's posts for training AI without asking for
opt-in." And again, I think that probably doesn't come
as a surprise to most people. I mean, lots of

(15:28):
platforms have done this, where they started to scrape their
own platforms for user information to train their AI models
without notifying users, let alone asking them for their consent
for this to happen. But while it might not be surprising,
it's still very upsetting to a lot of people who
likely did not anticipate that their online resume and their

(15:51):
various posts promoting their work or their companies would ultimately
serve as training fodder for AI models. Ashley Belanger of
Ars Technica has a related piece to this. It is
titled "How to Stop LinkedIn From Training AI on Your
Data." Now, Belanger points out right away that there's no
way to opt out of any training that has already happened.

(16:12):
You cannot say, oh, you know what, remove all my
stuff from your training model's brain. That's a no-go.
That cat is out of the AI bag already, so
there's no way to protect your data that has already
been used to train LinkedIn AI models. However, there are
some somewhat limited ways to opt out of future training.

(16:37):
LinkedIn will issue an updated user agreement related to this,
and at that point users will be able to opt
out of future training sessions. To do that, you will
need to go to your data privacy settings and look
for a bit that's related to data collection for quote
unquote generative AI improvement and make sure that that option
is turned off. Now, if you are in Switzerland or

(16:59):
if you're in the EU, where the law by default
requires LinkedIn to secure your consent before opting you into
this data collection program in the first place, you don't
have to worry about this. You will be able to
actually respond no, don't do that when you are prompted.
The rest of the world, we don't get that treatment.

(17:19):
But hey, I think that's a great example to bring
up when you're pushing US Congress to adopt stronger data
privacy laws, don't you think. In California, Governor Gavin Newsom
signed bills into law that are designed to protect actors
from predatory practices that involve, you guessed it, AI. So essentially,
these laws create rules that media companies are going to

(17:42):
have to follow if they are to create a digital
replica of an actor or a performer. One of the
two laws says that companies must have quote contracts to
specify the use of AI generated digital replicas of a
performer's voice or likeness, and the performer must be professionally
represented in negotiating the contract, end quote. So no just sneaking

(18:04):
that in. It has to be more transparent than that. Now.
The other law requires media companies to first acquire the
express permission from the estates of deceased actors before media
companies are allowed to create digital replicas of the dearly
departed actor in question. So, in other words, you can't

(18:25):
just go and create a digital replica of Clark Gable
to be in your movie without first acquiring legal consent
from Gable's estate. And this relates closely to issues that
SAG-AFTRA brought up while they were negotiating new contract
agreements with the movie studios. That's what ultimately led to

(18:45):
the union strikes not that long ago. Have you ever
been on a social platform like X, formerly known as
Twitter, and you posted something that you thought was amazingly
insightful or really funny, or just incredibly relevant to the
world and what's going on or whatever. Only then, you

(19:07):
got absolutely no engagement whatsoever, you know, no likes or responses, nothing.
That's a real bummer, right, I mean, here you are
spitting gold out into the universe and you're getting nothing back.
Wouldn't it be nice to receive lots of really positive responses.
Maybe people are riffing on what you said and pointing

(19:28):
out other things relating to what you were saying, and
creating a real conversation around it. Well, maybe SocialAI
is for you then, though I don't think so. You know,
SocialAI is only for you if you do not
mind that all the responses you get are generated by
AI bots, because that's what SocialAI is. It's an

(19:51):
app that, I argue, mimics a social platform, because everyone
else on your instance of this app is a bot.
Like, you're the only human left on Earth and everything
else is run by AI. You can select the types
of followers that you'll get responding to your posts. You know,

(20:13):
maybe you want people who are funny. Maybe you're looking
for nerds to respond to your stuff. Maybe you're looking
for insightful observations drawn from what you're posting.
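For a sense of how an all-bots feed like that could work under the hood, here's a toy Python sketch. It's not Sayman's actual implementation, just canned persona templates standing in for whatever language model the real app uses.

    import random

    # Toy sketch of an all-bot reply feed: canned persona templates stand
    # in for a real language model. This is not the actual SocialAI code;
    # the persona names and templates are invented.
    PERSONAS = {
        "funny": ["Ha! '{post}' is my new ringtone.", "Tell that to my cat."],
        "nerds": ["Interesting. Have you benchmarked '{post}'?", "Source, please?"],
        "insightful": ["'{post}' says a lot about the attention economy."],
    }

    def bot_replies(post, persona, count=3):
        templates = PERSONAS[persona]
        return [random.choice(templates).format(post=post) for _ in range(count)]

    # The lone human posts; the chosen follower type answers.
    for reply in bot_replies("I invented a better stapler", "nerds"):
        print(reply)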
I don't know how much mileage you'll actually get out of any
of those, because I've been reading some
of the responses that various reporters have

(20:35):
posted after they tried out SocialAI. They created some
posts and then they posted the responses they were getting,
and all those responses strike me as hollow and lacking
any substance whatsoever. Like to me, it reminds me of
being in school and you're in English class and you're
supposed to give a book report, and one by one,

(20:56):
students are going up to give their book reports, and
it's very clear who read the book versus someone who
only read the back book cover, and they're just trying
to stretch that out long enough to make it seem
like they read the whole thing. The responses I read
on SocialAI made me think of the students who
just read the back book cover and didn't actually do
the work. The developer behind SocialAI is a guy

(21:18):
named Michael Sayman, and I don't know why he made
SocialAI. Honestly, I don't know if this was kind
of his way to lampoon how meaningless a lot of
online engagement ends up being. Like if you've ever opened
up threads on Threads or on x or whatever, if
you just read like the comments section under posts, often

(21:39):
you come across a lot of stuff where calling it surface level
is being too kind; there's no depth whatsoever. I don't
know if Sayman was poking fun at Elon Musk for
buying Twitter and finding out what it's like to have
an entire platform filled with bots. You know, Musk had
said before he bought Twitter that Twitter was infested with

(22:00):
bots and he was going to clean it up. And
I think a lot of people now argue that X
has more bots, or a higher concentration of bots, than
Twitter ever had before Musk took over. Whether that's true
or not, I don't know, but certainly the perception is there.
Or I don't know if Sayman was doing this in a
sincere effort to provide comfort to lonely people who otherwise

(22:23):
get very little interaction from others. Right Like, if it's
someone who just feels like they're saying things and no
one's hearing them, then that can lead to a pretty
despondent day to day existence. So having something that makes
you feel heard and seen and validated. That could have
real value to some people. And maybe that's why Sayman

(22:45):
did this. It's hard to say, because he has given
various statements that support each of those motivations, and you know,
maybe he's motivated by a mixture of things, or maybe
the motivation has actually evolved over time, where maybe
it started either as a joke or as a sincere
effort to help people and then slowly evolved into the other.

(23:06):
I don't know, but whatever the motivation is, I can't
say that I'm impressed with the quality of the engagement
you get when you post stuff to social AI. It
does actually make me think of Threads a lot, because
whenever I do log into Threads, I get the distinct
feeling that a lot of the accounts that are being

(23:26):
promoted to me are actually being driven by AI bots.
I mean, maybe the account is held by a real person,
but it's an AI bot that's actually posting the stuff,
and it's all in an effort to drive engagement. That's
the feeling I get because there are just too many
accounts that are all asking essentially the same questions, right,
and they tend to be questions that prompt a quick response,

(23:50):
especially if you're linking it to a specific region, like, hi, Austin, Texas,
what are some great restaurants to look at? Something
like that, right? That's going to get a lot of responses,
at least in Austin and probably the surrounding areas, and
it drives a lot of engagement, and engagement
is like currency for influencers, right?
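For what it's worth, that kind of copy-paste bait is simple enough to spot with a crude heuristic. Here's a toy Python sketch that flags near-identical posts where only the place name has been swapped out; the sample posts are invented for illustration.

    from collections import Counter

    # Toy heuristic for spotting engagement bait: many accounts posting
    # near-identical questions with only the place name swapped out.
    posts = [
        "Hi Austin, Texas, what are some great restaurants?",
        "Hi Portland, Oregon, what are some great restaurants?",
        "Hi Denver, Colorado, what are some great restaurants?",
        "Just finished my first marathon!",
    ]

    def template_of(post):
        # Crude normalization: keep only fully lowercase words, so the
        # capitalized place names drop out of the comparison.
        return " ".join(w.strip(",?!") for w in post.split() if w.islower())

    counts = Counter(template_of(p) for p in posts)
    print([t for t, n in counts.items() if n >= 3])
    # The repeated restaurant template stands out; the marathon post doesn't.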

(24:10):
So, I get the feeling that a lot of the posts on Threads aren't
being made by actual people. They're made by bots that
are just trying to get as much engagement as possible,
and a lot of that stuff ends up being, you know,
pretty simple tricks, to the point where I very rarely
respond to anything in Threads unless I actually know the
person who's posting and I feel like, oh, that really
came from that person, not this is something that some

(24:33):
bots spat out in an effort to make number go up. However,
it's possible I'm just getting paranoid. Maybe I'm just paranoid
and I believe everybody's a bot, and I'm just positioning
myself to audition for the next version of The Thing,
except it'll be robots, not, you know, aliens. Maybe that's
the case. I'm going to take another quick break. When

(24:54):
I come back, I've got a little bit more tech
news to share with y'all. Okay, we're mostly done with
AI at this point. That was an awful lot of
news about artificial intelligence. But to be honest, that was

(25:14):
what was really dominating a lot of the tech conversation
this week. I mean that, and then obviously the Israeli
attack against Hezbollah using pagers and walkie-talkies as explosives.
Those were like the two big things that were in discussion.
But let's talk about X slash Twitter for just a

(25:35):
couple of seconds. So this first one's an update. You
might recall that a Brazilian Supreme Court judge ruled that
the internet service providers in Brazil were to shut down
access to X. This was after Musk refused to play
ball regarding the removal or censorship of certain accounts on

(25:56):
the platform in Brazil. If you're not familiar with the story,
the Supreme Court justice was telling Musk, hey, there's
this group of accounts that are spouting off misinformation and
hate speech in Brazil and those posts are causing harm
either directly or indirectly, and therefore we want you to

(26:19):
shut down those accounts. And Musk's response was, no, we
believe in free speech. Which, well, Musk believes in free speech
when it suits him. If it's free speech that is
critical of Musk, he's not as much in favor of that, honestly.
But anyway, he said, we're not going to do that.
We're not in the business of shutting down accounts just

(26:40):
because they say things you don't like. And the judge said,
all right, well, then what we're gonna do is we're
gonna shut down access to X in Brazil, and they did.
But this week, on Wednesday, X briefly returned to service
in Brazil, not because the Supreme Court in Brazil allowed

(27:00):
it to, but rather they were able to circumvent the
issues by using third-party cloud services to restore
service in the country. So the Supreme Court judge did
not just let that go unnoticed. The justice said,
if X remains active in Brazil despite the ruling against it,

(27:22):
then Brazil will levy a nine hundred thousand dollars a
day fine for every day that X would remain accessible
within the country. As you might imagine, X is now
once again down in Brazil. So the battle between Brazilian
judges and Elon Musk continues. Now again, y'all, if you've

(27:45):
been listening to the show for a while, you know
I am not a big fan of Elon Musk. I
do not care for him at all. However, I also
don't think that censorship is a reasonable approach either. I
think content moderation is important, and I think that's something
that X has really, really really fallen short on. Twitter

(28:09):
was never good about content moderation, but X said,
hold my beer, and has taken that to the
nth degree. So I do think that X requires better
content moderation, which supposedly is something that's actually in the works.
But I don't know that censorship is the right

(28:29):
answer from like a governmental source. So I think this
is a story where I don't agree with any party
that's involved in it. I mean, I don't value X
as a service anymore. I personally do not. Like, I
don't use it anymore. I deactivated my account ages ago.
But I at the same time recognize that for a

(28:49):
lot of people it serves a really useful purpose and
I don't want to see that just get tossed aside.
So this is a complicated situation where I don't think
anyone is really in the right and the people who
are suffering are the ones who are being driven to
stuff like Bluesky and Threads. Not that Bluesky

(29:10):
is terrible, but Threads is pretty bad. I've been using
Threads for a bit, and I'm like, why did I
bother doing this? I think for about a month, I've
been using Threads again, and I question each time, like
am I going to come back and use it again today?
Or is this it? Am I done? Anyway? Enough of that,
let's move on. So here in the United States, X

(29:30):
has also officially relocated its headquarters. It originally was headquartered
in San Francisco, California. Now it has moved to Bastrop,
Texas. I'm sure I've mispronounced that name, but
that's because Texans decide to pronounce things in their own
peculiar way. Like in Austin, there's a street that's spelled Guadalupe,

(29:54):
but if you say Guadalupe, you'll be laughed out of
the city because it's Guadaloup. I can't really criticize that.
Here in Atlanta, we have a street that's called Ponce
de Leon, but it's pronounced Ponce Delian. Like, you got
to say Ponce Delian or people won't know what you're
talking about. So anyway, that's beside the point. The move
from California to Texas had a lot of political motivation

(30:16):
behind it. Namely, Elon Musk has clashed with
California's political leadership on numerous occasions over lots of different topics,
including recently, when the state of California passed laws
meant to protect transgender students from being outed to their
parents without their consent, and Musk apparently can't abide that

(30:39):
kind of thing. He's just like, no, that's not cool.
I want people's lives to be put into danger. I
suppose if you'd like to learn more about this relocation
from California to Texas, I recommend Andrea Guzman's article in
the online paper Chron, that's C-H-R-O-N. That article is titled

(31:01):
Elon Musk officially moves X headquarters from California to Texas.
That'll be helpful. It also explains that the folks in
California don't know yet if they're going to be required
to relocate to Texas. Or if other offices in areas
around San Francisco, such as like San Jose will remain open.

(31:21):
They don't know yet. And a lot of Elon Musk's
businesses are now headquartered in Texas, where he finds a
lot more political parity with the folks in charge in
that state. So how much power does AI need, as
in electricity? We're not totally done with AI after all,

(31:43):
it seems. Well, apparently it needs enough electricity to require
a nuclear power plant that had been shut down to
come back online, because Microsoft has signed a deal with
Constellation Energy, and that deal will require the restart of
Unit one of the infamous Three Mile Island nuclear power

(32:06):
plant in Pennsylvania. Now, way back in the nineteen seventies
here in the United States, Three Mile Island became the
focus of national news when Unit two had a partial
nuclear meltdown that also included the release of some radioactive
gases and radioactive iodine into the surrounding area. That unit

(32:28):
is not going to be reactivated for obvious reasons. So
Unit two is not, like, going to go from partial
meltdown to back in action. Unit one will be restarted. Now, also,
to be clear, Three Mile Island did not totally shut
down after this partial meltdown. It came back online
at a reduced capacity and remained in operation until

(32:52):
twenty nineteen, when it shut down. Now, this deal is
going to have it come back online by around twenty
twenty eight, and this is in order to supply Microsoft
with that sweet, sweet lightning juice that's needed to power
all of Microsoft's various robots, or its AI efforts, in
other words. Now, this really reminds me of the fact

(33:12):
that we're always going to need access to more energy,
and we'll use all the available sources that we have.
Because one of the big talking points about fusion is
that if we can get fusion to work, then we
would meet the world's energy needs pretty handily. In fact,
we would meet them many times over, the idea being

(33:35):
that we would have plentiful energy, prices would drop, and
we wouldn't have to worry about deficits at all. But
I think AI kind of proves that we always will
come up with more ways to require more energy. The
word enough just doesn't come into it. There will never
be enough, because we'll just find new ways to require more,

(33:56):
which is a sobering thought, and one that we need
to remind ourselves of when we get carried away with
news relating to things like fusion. For example, I get
really excited when I read about advances in fusion because
it's a super interesting technology. It has the capacity to
provide a lot of energy with very few downsides,

(34:17):
apart from the fact that it's really hard to get
it to work right and we haven't done it yet.
But like, if we do get it to work right, it
could be phenomenal. But then stories like this remind me
of, yeah, but we are going to have stuff like
cryptocurrency miners and AI language models and all this kind
of stuff that just is incredibly hungry for electricity. So

(34:40):
we will find ways to consume all that excess that
we have produced. Fun world. Finally, Disney is taking out
the Slack, and by that I mean Disney plans to
transition off of the Slack collaboration tool after hackers managed
to access more than a terabyte of corporate information from

(35:02):
inside the Mouse House, and then they leaked that data, or
at least some of it online. The leaked information included
stuff about unreleased projects from Disney Entertainment and included
more than forty-four million messages between Disney employees, or
staff, or, I don't know, cast members, whatever. I haven't
seen details as to how precisely the hackers got access

(35:25):
to Disney's Slack channels. I would gently remind leaders that
whatever platform you choose for
project management or collaboration, whatever it might be, it is
important that that program is secure. But even the most
secure system is not going to protect you unless your
employees learn and follow good security etiquette. Otherwise, you can

(35:50):
swap out tools until the cows come home, and hackers
will still find someone to exploit in order to get
at the goods. So I'm not saying that Slack is perfect.
I don't use Slack personally. I just think that throwing
Slack under the bus is probably short-sighted, unless, of course,
the hackers did exploit an actual vulnerability in Slack, which

(36:12):
is entirely possible. The information I came across was not
clear about that one way or the other. If the
hackers were able to exploit a vulnerability in Slack itself,
that spells trouble for all of Slack's users at Disney
or otherwise. So that's a much, much, much bigger issue. If, however,

(36:32):
this was a case where they were able to get
access to Disney's Slack channels, not because Slack itself had
a vulnerability, but because someone within Disney accidentally handed over
the Keys to the Kingdom, which, by the way, Keys
to the Kingdom is a great behind-the-scenes tour
at Walt Disney World. That's not really what I was
talking about, but yeah, if that's the case, it doesn't
matter what platform you're on. I mean, people are still

(36:53):
prone to getting tricked by things like social engineering. So
just a reminder for everyone out there that getting the
so-called best tool doesn't always guarantee success. Okay, that's
it for the news for this week. I hope all
of you are well, and I'll talk to you again
really soon. TechStuff is an iHeartRadio production. For more

(37:22):
podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or
wherever you listen to your favorite shows.
