Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:03):
Hi everyone, I'm John C. Morley here the host of
The JMOR Tech Talk Show and Inspirations for
Your Life.
(00:23):
Well, hey,
(00:47):
guys, welcome to the JMOR Tech Talk Show.
It's great to be with you.
If you are new here, well, welcome, it
is so great to have you here.
If you're coming back, well, welcome back.
It's so great to have you back again
on the show.
We are the show that gives you the
insights about technology around the globe and things
that you need to know about if you're
(01:09):
using technology and living in the world.
All right, guys, we have an amazing topic
for this week, and that is tech tensions
and breakthroughs from antitrust battles to AI deals.
And there's a lot that we need to
go through here.
By the way, if you haven't checked it
out, well, please visit BelieveMeAchieve.com for more
(01:30):
of my amazing, of course, inspiring creations, which
you can do 24 hours a day, either
anytime after the show or anytime at night.
It's totally up to you.
We've got lots of great information.
Hey, guys, if you're hungry or you're thirsty,
head on over to your kitchen real quick.
I've got my fresh RO water here, so
I'm not dehydrated.
Make sure you get yourself something delicious.
(01:51):
Maybe it's something hot.
Maybe it's something cold.
Maybe it's something sweet or tart or not,
healthy or not.
That's up to you.
And hurry on back, guys, to The JMOR
Tech Talk show.
All right, guys, let's kick this off because
I have so much that I want to
share with you.
Again, I'm John C.
(02:12):
Morley.
If you didn't know, I'm a serial entrepreneur.
I'm an engineer, national talk show host, lifelong
tech enthusiast, and passionate person for making changes.
And I'm also a podcast coach and a
video producer.
And every week, guys, I peel back the
layers of the onion to the headlines for
(02:33):
revealing what's really happening in the world of
tech beyond the press releases and, of course,
the big politics.
On The JMOR Tech Talk Show, I
really dive deep into the stories
that will affect your life, whether it's AI
deals shaking up journalism, rockets gone rogue in
orbit, or government crackdowns on surveillance tech.
(02:57):
You'll get the context here.
You'll get the clarity here and the commentary
that matters most.
So this week, yes, we've got everything from
high stakes legal fights to interplanetary delays.
So buckle up.
It's going to be a little bumpy tonight.
And get ready for one of our most
(03:17):
packed episodes yet.
All right, let's just kick right in, all
right?
The first thing I want to talk about
is Google.
I haven't talked about them in a little
while.
But Google is deciding to challenge the antitrust
ruling.
That's right.
Google has officially decided to appeal a federal
(03:38):
antitrust decision that has accused it of unlawfully
monopolizing online search.
Now I think that's something that a lot
of people think is kind of a little
crazy, and I have to tell you, it
definitely is.
Google's done a lot of things wrong, and
not to get into that tonight, but they've
got their hands stuck in a cookie jar
(04:00):
more than one time.
And it's time, ladies and gentlemen, to pay
the piper.
The federal antitrust decision accused the company of
unlawfully monopolizing online search and advertising
markets.
I firmly believe that they did do that.
Getting vendors to make sure that their search
(04:23):
engine showed up by default.
I mean, come on, guys, really?
Regulators allege that Google's multi-billion-dollar agreements
with companies like Apple unfairly cemented its dominance
by setting it as the default search engine,
as I said, on billions of devices.
Now, while the Department of Justice pushes for
(04:44):
sweeping remedies, including the breakup of certain business
units, Google insists the court misunderstood the competitive
landscape.
Interesting.
As generative AI reshapes search behaviors and
disrupts the very foundation of how we interact
with the internet, Google is betting that innovation
(05:06):
will be its best defense.
Well, if that's all they're betting on, I
think they better do a little more homework
because I got to tell you guys, that's
not enough, and it wouldn't impress me.
So you better sharpen your pencils there, Google.
And number two, Microsoft is doing something bold.
(05:29):
Microsoft has planned to exit Russia with a
bankruptcy filing.
Microsoft has basically confirmed that its Russian
subsidiary, Microsoft Rus LLC, will file for bankruptcy,
a symbolic but very significant step in
its gradual exit from Russia.
Now, following the invasion of Ukraine and
(05:51):
intensifying Western sanctions, Microsoft had already scaled back
most of its operations in the region.
The Kremlin's increasing hostility toward foreign tech firms,
including attempts to favor domestic alternatives, made continuing
business untenable, they claim.
Microsoft joins a long list of global tech
players, including Google and Zoom, that are either
(06:12):
pulling out or being pushed out of Russia's
increasingly isolated tech landscape.
Apparently, Putin feels that there is no room
for any of these services or products in
his world, and he's doing everything to make
sure that they are unwelcome.
And now Russia's trying to come up with
(06:33):
their own services, like their own type of
Zoom or Teams or things like that, because
they feel they can control it.
Maybe they're afraid of the data, who knows?
But I think the real reason is about
power and control.
And by them doing this, they can actually
spy on the people using it right there
in Russia.
(06:54):
I know I wouldn't use Russia's software, but
that's just me.
So we will keep an eye and an
ear on everything that's going on in Russia
and hopefully give you some insights as soon
as we know about them.
Ladies and gentlemen, Zoox, Amazon's autonomous vehicle
(07:16):
division, issues its second robo-taxi recall after
a crash.
Now Zoox, in case you didn't know this,
has recalled 270 of its self-driving cars
after a collision involving a scooter in San
Francisco.
The incident, which left the scooter rider with
minor injuries, revealed a flaw in the system's
pedestrian detection.
I will say Zoox responded with a software
(07:39):
update designed to prevent the vehicle from moving
when it knows a person is nearby,
and paused operations while conducting internal tests.
Now, though no passengers were involved and the
vehicles are not yet in public use, the
crash raised some serious questions about the readiness
(07:59):
of autonomous vehicles for real world environments.
And I've been saying this, guys, over and
over again, the reason that people want to
do this is because of the money.
It's about money.
It's about power.
It's about control.
I mean, that's the real reason that they
want this.
It's not for convenience.
It's not to make people's lives easier.
(08:21):
It's so they can get more control.
That's the truth.
So they can get more control.
And I think it's really crazy how this
is going.
But, you know, they're trying to make it
sound like it was just something minor.
But if it was in a major, let's
(08:44):
say, intersection and more than just one person
got injured, I think it would pretty much
be 86'd, removed off the platform.
But there are a lot of people doing
things who don't really understand why things are
happening, and it's important,
(09:09):
guys, to understand what's happening.
And when we talk about what's happening, I
think that's really, really important because
(09:30):
people do things for basically one reason, actually
two reasons: to avoid pain and gain pleasure.
That's why the hospitals are so busy, because
they want to basically get out of pain.
(09:51):
But would it be better if people did
things to actually get more into pleasure?
That'd be an interesting concept.
But the world doesn't think like that.
And so all these different companies think that
autonomous vehicles are ready for the world.
So the question you might be asking me
is when will the world be ready for
(10:13):
autonomous cars and vehicles?
Well, there's no definitive date for a fully
autonomous car market.
And many experts are predicting that self-driving
technology, what we call, quote unquote, level five,
will likely be widely available by 2035.
That's 10 years from now.
(10:35):
More realistically, widespread adoption of advanced autonomy, level
two and three, is projected for around 2030,
with robo taxis operating at scale in numerous
cities.
So you might want to know what the
heck is the difference?
Sure.
Well, level two and two plus are advanced
(10:56):
driver assist systems.
These systems already offer features like lane keeping
assist.
I have that in my vehicle.
Adaptive cruise control, which I love.
As well as automatic emergency
braking.
I have had that in my last two
vehicles, and it's pretty amazing.
Sometimes the brake kicks in when there's no
real reason for it, but I'd rather have
(11:17):
a false positive like that, preventing a type
of accident.
They are likely to become dominant in cars
by 2030, these two technologies.
Now, level four is limited autonomy.
This level allows for autonomous driving in specific
conditions, such as on highways or in designated
areas.
Some experts suggest that level four robo taxis
(11:38):
could begin operating in urban areas by late
2025.
I don't know about that.
Potentially offered by companies like DiDi, Uber, and
Lyft.
Now, level five is the big, big banana
level.
That is the ultimate goal of autonomous driving,
where the vehicle can handle all driving tasks
in any situation without human intervention.
(11:59):
However, achieving this level of autonomy is still,
well, a bit of a challenge.
And experts predict it won't be really ready
until 2035 or 2040, assuming we don't run
into any more types of problems.
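To keep those levels straight, here's a minimal Python sketch summarizing what was just described. The level descriptions follow the SAE automation scale as discussed above; the timelines are the rough projections mentioned in this episode, not official commitments:

```python
# SAE driving-automation levels as discussed above, paired with the
# episode's rough availability projections (not official dates).
AUTONOMY_LEVELS = {
    "2/2+": ("Advanced driver assist: lane keeping assist, adaptive cruise "
             "control, automatic emergency braking",
             "likely dominant in new cars by ~2030"),
    "4": ("Limited autonomy: self-driving only in specific conditions, "
          "such as highways or designated areas",
          "robo-taxis possibly in urban areas in the late 2020s"),
    "5": ("Full autonomy: all driving tasks, any situation, "
          "no human intervention",
          "predicted widely available ~2035-2040"),
}

for level, (capability, timeline) in AUTONOMY_LEVELS.items():
    print(f"Level {level}: {capability} ({timeline})")
```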
All right, guys, I always said things are
about money, right?
And we all know that the New York
Times was very against artificial intelligence.
(12:22):
Well, here's something to float your boat on.
The New York Times, you ready?
Signs their first AI deal with Amazon.
In a landmark move.
And so, you know, the New York Times
said they were never going to support this.
But you know what?
(12:43):
When the money became sweet enough for their
pocket, guess what they did?
They changed.
They adapted to a new way.
That's pretty incredible.
And so, the New York Times has entered
this multi-year licensing agreement with Amazon, granting
(13:03):
access to select editorial content for use in
AI products like Alexa.
Now, this includes material from New York Times
Cooking and The Athletic, marking the Times' first
official AI content deal.
Again, it was never about them not wanting
it.
It was the fact that they weren't getting
real money for it.
Now that they're sweetening the pot, well, it's
(13:26):
becoming a reality.
The question I have is, is the New
York Times actually paying a lot of their
people for this or not?
Because, you know, if you're an author and
you're writing to give something to the New
York Times, is there a new clause that
says they're allowed to give the content?
I know if I wrote for them, I
would want to get compensated, especially if my
(13:47):
work shows up in other systems like AI
that could be featured anywhere.
So, again, the material that we're talking about
right now is from New York Times Cooking and
The Athletic, but there could be more.
And so with media companies increasingly concerned about
copyright violations by AI firms, this partnership is
being closely watched and scrutinized.
(14:08):
I'm sure you know that.
The Times has ongoing lawsuits against Microsoft and
OpenAI, but appears to be strategically monetizing its
content through authorized channels as AI tools become
more embedded in everyday life.
And I think this is something that a
lot of people don't really understand.
(14:28):
They don't understand how things are working.
They don't understand that, you know, these big
companies that say no, it's never about no,
it's about we haven't gotten enough money.
So we're going to give you some lame
excuse.
And here's a story, ladies and gentlemen, I
just can't make up and no one can,
no matter how hard you try.
I mean, this is like pathetic, this one.
(14:51):
A crypto investor, basically a 37-year-old
guy, was indicted for kidnapping in a plot
to gain Bitcoin access.
This New York man has been indicted for
an extraordinary crime, kidnapping a business partner for
three weeks in an effort to steal his
Bitcoin credentials.
(15:13):
The alleged mastermind, John Woeltz, and an accomplice
reportedly tortured the victim using electric shocks and
violent threats to force access to digital wallets.
The ordeal unfolded in Manhattan and ended only
when the victim managed to escape.
This bone-chilling story just highlights the
(15:35):
whole point about how the anonymity and high
value of cryptocurrency can turn digital wealth into
a dangerous real world target, drawing attention to
the darker side of decentralized finances.
And so, you know, there's so much that
can go wrong with crypto, right?
If you forget that long string password, which
is more than just a few words, right?
(15:55):
It's typically, what, 12 or 24 words.
If you forget that anywhere, well, guess what?
No one can help you and you just
lost all that money to an abyss somewhere
online.
So I think that's a big problem.
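For a sense of why a forgotten recovery phrase really is gone for good: Bitcoin wallets commonly use BIP-39 mnemonic phrases of 12 or 24 words drawn from a fixed 2048-word list, so guessing a lost phrase is computationally hopeless. A quick back-of-the-envelope check in Python (the word counts and list size come from the BIP-39 standard, not from the show):

```python
import math

WORDLIST_SIZE = 2048  # size of the standard BIP-39 English word list

# Each word encodes log2(2048) = 11 bits (entropy plus checksum bits),
# so a 12-word phrase spans 132 bits and a 24-word phrase spans 264 bits.
for n_words in (12, 24):
    bits = n_words * math.log2(WORDLIST_SIZE)
    combos = WORDLIST_SIZE ** n_words
    print(f"{n_words} words -> {bits:.0f} bits, "
          f"about {combos:.1e} possible phrases")
```

Even the shorter 12-word phrase has more possible combinations than anyone could ever brute-force, which is exactly why no one can help you once it's lost.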
And ladies and gentlemen, the IDC does something
pretty cool.
They slash global smartphone forecasts amid the tariff
(16:17):
concerns.
And the International Data Corporation, which is what
this is, has sharply cut its 2025 forecast
for global smartphone shipments from a 2.3
% increase to just a 0.6%
increase.
Now, why do I say this?
I'm being a little facetious, okay?
Because we don't want things like this to
(16:38):
happen, but because of what's happening with Trump's,
you know, bans and tariffs, this is starting
to hurt commerce and industry, and not just
the United States.
Analysts cite growing uncertainty from the US
-China trade tensions, inflation, and slowing consumer demand
(17:00):
as a problem, and it's going to get
worse.
Apple in particular is expected to see a
decline in shipments as tariff threats mount.
Meanwhile, Android sales are buoyed by subsidies in
key markets like China and India.
As the market slowly matures, many consumers are
(17:22):
holding on to devices longer or turning to
refurbished phones.
The outlook through 2029 remains cautious as tech
firms navigate economic and geopolitical minefields.
We talked about this before.
So I think this brings up a really
good question.
How long is, let's just say, an iPhone
designed to last?
(17:42):
If we were to look up, you know,
those specs.
And I think what we would find is
that an iPhone is typically designed, according to
most specs, to last four to eight years.
However, some users, like myself, I upgrade mine
almost every year.
And I'll tell you why in a minute.
(18:03):
So the technology in the phone, when you
use it a lot, is going to fail.
I have found on some phones that the
Wi-Fi starts to go.
So I never even give it a chance
to go.
I replace it before that, because if it
goes while I'm depending on it, that's a
real problem.
Now, things like, obviously, negligence and dropping your
(18:25):
phone and not having the proper case and
just being very neglectful of your phone.
That's a problem in itself.
But they claim, the industry claims that you
should be keeping your phone four years.
No way.
No way, guys.
Maybe two, maybe three.
But after one, two years, you start to
have issues with the buttons.
(18:46):
You start to have issues with lots of
things.
The screen and all kinds of stuff.
So how can you keep a phone that
long?
But this leads me to another very important
discussion.
You know, how long do most electronics last?
(19:06):
Computers, et cetera, and home stuff.
How much?
How long?
So it's a great question.
The lifespan of desktop computers is three to
eight years.
I generally recommend clients to start planning their
upgrade around the fourth year.
So by the fifth year, it's totally replaced,
because that's when some damage can start to
happen.
Components like hard drives, even though we have
(19:26):
SSDs, things can still fail.
And processors can degrade over time.
With proper care and occasional upgrades, a desktop
PC can potentially last longer.
But some people don't want to, you know,
sink the money into it, especially if they're
not technically savvy, because they're gonna have to
pay someone like us.
Laptop computers generally have a slightly shorter average
(19:48):
lifespan, usually around three to four years.
Another source has told us that useful device
lifespans are about five years.
I tend to agree with that.
But if you're using the device as a
very, let's say, daily thing that you need
to have in your life, well, you're gonna
rely on it more.
Televisions, they're saying, will last eight to 10
years.
(20:08):
I believe that.
Smartphones, two to three years.
That's what the industry is saying.
What some companies were saying is four to
eight.
Apple would like us to believe that, right?
But that's not really true.
We know that.
Tablets are projected to last about seven years.
(20:30):
So, it varies on the type of device,
right?
You know, for example, how long do most,
let's say, vacuums last?
So, the average lifespan of a vacuum cleaner
is typically five to 10 years.
However, this can vary slightly.
Corded upright vacuums may last eight to
(20:51):
10 years, while cordless vacuums may last only
around five years.
Robot vacuums might last four to six years.
With proper care, canister vacuums could last 10
to 15 years.
That's a pretty big change there, right, guys?
And so, you know, how long do most,
let's say, refrigerators last?
(21:16):
Well, most refrigerators will last between 10 and
20 years.
However, this lifespan can vary depending on factors
like the type of refrigerator, its quality, and
how well it's maintained.
For example, built-in refrigerators can last longer,
potentially up to 20 to 25 years, while
standard models may last 10 to 15.
And this is according to the United States
Department of Energy.
(21:37):
So, on average, most fridges will last 10
to 20 years, but this depends on the
brand, as I said, the type, and how
well you take care of it.
High-end fridge brands like Sub-Zero and Monogram
are built to go the distance, while Whirlpool,
LG, and Beko typically last 10 to 15
years with proper maintenance.
Does it mean that it's always going to
last that long?
No, it doesn't.
(21:58):
What's something else?
What's another technical gadget around the house that
everyone uses?
How long do most, let's say, hair dryers
last?
Typically, the cheaper hair dryers last between two
to three years, while the professional-grade hair
(22:20):
dryers will last six or seven years.
So, I think when you go to buy
something, these are important things to consider.
You know, like the brand.
When I'm picking a vacuum, am I going
to go straight to Dyson all the time?
Don't have to.
There are other great vacuums, but I think
for the, let's say, the rechargeable, portable vacuums,
(22:43):
you know, that you kind of carry around.
Dyson does make a good product, but I
have to tell you, I've had issues where
a vacuum that's not even five years has
had other things go wrong with it.
One time, my cleaning lady was vacuuming, and
she got water in the chamber, and we
had to go back to Dyson, and they
had to put a whole new motor in.
Just recently, we found out that our battery
(23:06):
is now not charging well anymore.
We just dump money into this thing.
So, I think you have to ask yourself,
you know, does it pay to fix it,
number one?
That's probably a big thing.
And then, think about how long is that
vacuum going to last?
But these things are not made to last
very well, unfortunately.
Even your own car, right?
Let's ask this question.
(23:27):
How long will the gadgets, I'm talking about
Bluetooth, Wi-Fi, radio last in your car?
I mean, let's talk about that for a
minute, right?
So, there are many factors that can influence
that.
(23:48):
And so, you know, you have to realize
that if a car lease is about 36
months, I found that things are going to
go bad.
When I had a, let's say it was
a 48-month lease, things started going bad
right after three and a half years.
So, having a three-year lease sounded like
a good idea.
(24:08):
And speaking about secrets, Victoria's Secret just experienced
a cyber attack, forcing a complete website shutdown.
The retailer, Victoria's Secret, has been hit by
a cybersecurity incident that disrupted their complete website
operations and some in-store systems.
The company quickly activated response protocols, brought in
(24:32):
external security teams and temporarily disabled key online
functions, while physical stores remained open.
The breach raised major concerns about how prepared
legacy retailers are for digital threats.
Victoria's Secret has not disclosed the nature or
the origin of the breach, but its stock
dropped nearly 7% because of the aftermath
(24:53):
and people feeling, hey, we're a little uneasy
and we don't really want to have our
money in a company like this that could
go down overnight.
We know what happened with M&S, but
M&S did a very good job responding
to the situation.
I think companies like M&S, companies like
Victoria's Secret, yes, they've been around for a while.
Yes, they have some money, but the problem
(25:14):
with a lot of them, they're run by
old school people and the old school people
don't want to adapt to understanding that you're
never 100% safe.
If you believe you are, you're going to
get in trouble.
Even being in the industry for a long
time, I know we have to constantly do
our own audits.
We have to make sure that we're not
doing stupid things or that we're not, let's
(25:36):
say, missing a potential vulnerability, right?
And Kaiser Permanente battles a major network disruption
recently.
Kaiser Permanente is experiencing ongoing network disruptions that
are affecting crucial systems such as electronic health
records, billing, e-visits, and even internal communications.
(26:00):
Now, while patient care continues through backup protocols,
many services, especially pharmacy and lab operations, are
seeing significant delays.
The source of the outage is still under
investigation, though the organization has not confirmed a
cyber attack.
As healthcare systems become more digitized, this incident
(26:23):
highlights how essential it is for providers to
build a resilient infrastructure that keeps changing and
evolving to keep up with and stay ahead
of current security threats.
And that's not only able to do that,
but able to keep running through technical
(26:44):
failures.
That means, are you set up to have
some type of a solution if the power
goes out?
Data centers, what do they have?
They have generators to kick on, right?
Some of them have solar.
And those generators that kick on, they're not
powered by gas, guys.
They're powered by diesel engines.
(27:05):
There's one in New Jersey, when I went
to go look at it, they have a
room.
And then they have like 12 generators.
And you can see it as you walk
through one part of the data center.
And I'm like, oh my gosh.
I said, how long could you guys run?
They said, we could run a whole seven
days if we lost power.
And now they're working on doing things so
they'll be able to run a month.
(27:26):
So these are definitely some interesting things that
are coming down the pike.
And I think more people are starting to
realize that it's not just a nice thing
to have technology, right?
We have to understand what our technology is
about, what it does, and what it doesn't
do.
(27:46):
And when we figure out what it doesn't
do, you know what that's time for?
That's time for us to start knowing that
we've got to do something.
And the reason we have to do something
is if we don't, well, we're going to
get hit.
We're going to get hit with a pretty
big problem.
A problem that could be, I mean, a
(28:06):
real big issue.
An issue that maybe is going to, well,
it could potentially cripple your business, right?
I mean, it could.
I'm not saying it does.
But it could cripple your business.
And I think those are important things to
understand.
And so I know that a lot of
(28:30):
you out there think because you spent so
much money that your infrastructure is impenetrable.
Well, I hate to burst your bubble.
But different technology, I'm not going to go
through some of them and how quickly they
become obsolete.
But there are certain types of technology that
are used for, let's say, making sure buildings stay
(28:52):
safe.
And that technology, a certain type of that
technology, basically needs to be replaced about every
five years.
The reason I say that is this one
particular system has a very unique subcomponent that
gets hacked all the time.
(29:13):
Again, I'm not going to get into the
specifics about it.
But just understanding that says to me, hey,
I need to have this as part of
my budget to keep upgrading.
It's like an item, like a commodity that
you run out of.
Now, instead of you running out of it,
you really run out of its trust factor
(29:36):
towards you.
And our friends at India are up to
something interesting.
Yep, India's new rules shake up the global surveillance
industry.
India is shaking up the $3.5 billion
surveillance tech market by mandating that all CCTV
vendors, foreign and, of course, domestic, submit their
(29:58):
hardware, their software, and their source code to
government labs for security testing.
So officials say the policy aims to reduce
dependence on Chinese tech giants like Hikvision and
Dahua, citing national security risks.
(30:18):
While the move is framed as a defense
strategy, critics warn that it will, well, affect
project timelines, costs, and delivery, of course
delaying imports and risking billions for corporations
because contracts may be lost.
(30:41):
Now, I feel that's a small price to
pay.
If we're building our level of security, then
we need to buck up a little bit,
right?
The new rules signal India's intent to become
a self-reliant tech superpower, but not without
global fallout.
And I think so many people out there
that I talk to, they complain about, oh,
(31:03):
you know, this is this way, this is
that way.
But you've heard this before.
If you're not part of the solution, then
guess what, guys?
You're part of the problem.
Now, people don't like to hear that, but
that's absolutely the truth.
I mean, that is really the truth more
than you ever could imagine.
(31:23):
And how'd you like this?
How'd you like to go take a trip
for a few days, and then you find
out you're stranded for, oh, I don't know,
several months, eight months, nine months?
Well, that's exactly what happened to some astronauts.
NASA's astronauts ended an unexpected nine-month stay
(31:43):
in orbit, which was only supposed to be
a short little trip.
The two NASA astronauts, Suni Williams and Butch
Wilmore, were slated for an eight-day test
mission aboard the Boeing Starliner, but ended up
spending nearly 10 months on the ISS due
to technical malfunctions.
(32:07):
Wow, reminds me of that trip they called
the three-hour tour, Gilligan's Island, and
suddenly they just happened to take over
the island.
The thing that gets me with that show
is, you know, they have everything set up
so right.
They build everything, but yet they have all
these clothes.
They have everything.
They got shipwrecked, but they have everything they
need.
(32:28):
It's, like, not realistic, right?
So I know one thing.
Hearing this, I would definitely never want to
go to space.
There's other reasons, too.
But the spacecraft's thrusters failed to operate properly
after docking, raising, well, some substantial fears about
whether a return to Earth was even possible.
(32:50):
Now, while the media was painting a very
doom-and-gloom picture that they'll be, quote
-unquote, stranded, NASA confirmed that backup operations were
always available.
Eventually, they returned safely via a SpaceX capsule.
Despite the hiccups, both astronauts expressed trust in
the program and readiness to fly Starliner again.
(33:14):
Well, they're definitely very brave.
And I think they were definitely coached.
I don't think they really want to fly
again.
Again, this is their job.
And this is why they're shrugging it off
like that, right?
I don't know.
(33:37):
I think something needs to be more in
place.
And in case you were wondering what they
put their lives on the line for, the
National Aeronautics and Space Administration: NASA civilian astronauts
are paid on the General Schedule pay scale
used for civilian U.S. government employees.
(33:59):
According to the federal pay tables, astronauts would rank
GS-12 to GS-13, translating to $84,365 to
$115,079, according to the 2024 General Schedule
rates.
Um, so that's interesting, right?
(34:23):
And so in 2025, the salary for NASA
astronauts typically falls within the General Schedule pay
scale, with the higher tier now being GS-15.
So it was GS-12 to 13, and now
it goes up to GS-15, offering a base
salary ranging from $125,000
(34:43):
to $163,000.
In fact, ZipRecruiter had reported salary data with
some averages around $112,198 annually and hourly
rates ranging from $41 to $75.
Additionally, astronauts may receive per diem allowances for
(35:05):
expenses while on missions.
And this was all according to a post
that I got in one of the journals.
And it may allow them to earn additional
income based on experience and specific roles within
NASA.
So, I don't know about you, but I
definitely don't think I would board a space
shuttle anytime soon.
(35:26):
I'm okay with flying on a plane, but
I don't think I'm gonna board a space
shuttle, especially when they've had all these little
issues.
Now Apple says no, and that's a big
no, not a, like a maybe.
Apple says a big flat out red no
to iPhone production in the United States.
Why?
(35:46):
Well, despite increasing political pressure and proposed tariffs
from President Trump, both before and after he
returned to office, Apple remains
firm in its decision not to manufacture iPhones
in the United States.
You're like, huh?
Well, experts say the U.S. lacks the
infrastructure, labor, specialization, and scale needed to produce
(36:08):
iPhones domestically.
No, what it really is, is that it's
going to cost a lot more money to
get the labor to produce these phones in
the United States.
And that would significantly raise the cost of
the phone.
Forget this nonsense about the tariffs.
If we produce the phone in the United
States, the costs would right now skyrocket because
(36:31):
we're just a mess as a country for
technology.
Shifting production stateside would not only raise costs,
but could also affect product quality and, they
say, release timelines.
Now Apple is instead investing $500 million, folks,
in its U.S. operations, focusing on R
(36:51):
&D, research and development, and support services.
And areas that don't require the same scale
as the hardware assembly.
But the thing you might want to know,
I think this is really interesting.
How much does an iPhone, let's say a
15, cost to make, one at a
(37:17):
time?
How much does it cost them to make one?
Well, they claim it costs $558 to manufacture.
The iPhone Pro they claim costs $523, while
the standard iPhone costs $423 to manufacture.
(37:39):
So this is wacky, okay?
The assembly cost, the amount paid to the
dudes that take the parts and put them
together is about $10.
So this number is totally BS, right?
I think they're inflating it and not really
giving us the truth.
And so labor costs, right?
(38:03):
Labor costs.
People seem to forget that Apple, along with
other megacorps, is publicly traded.
They have to reveal their profit margins.
It's pretty easy to discover how much profit
they make on an iPhone, and therefore how
much it costs per unit.
At that margin, it's about 50% for
(38:24):
the iPhone 15 Pro.
Of course, this margin changes as the cost
of production shifts.
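If you want to sanity-check that roughly 50% figure yourself, here's a quick back-of-the-envelope sketch in Python. The $523 build cost is the estimate quoted above; the $999 launch price is my own assumption for illustration, not a figure from the show:

```python
# Back-of-the-envelope gross margin check (illustrative figures only;
# the $999 launch price is an assumption, the $523 build cost is the
# estimate quoted above).
def gross_margin(retail_price: float, unit_cost: float) -> float:
    """Gross margin as a fraction of the retail price."""
    return (retail_price - unit_cost) / retail_price

margin = gross_margin(999.0, 523.0)
print(f"{margin:.1%}")  # about 47.6%, in the ballpark of the ~50% figure
```

Of course, the true margin also depends on assembly, shipping, marketing, and everything else between the parts bin and the store shelf.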
So that's what they're saying, okay?
The materials are way more expensive than $100.
I know people say it costs $10 to
make.
Yes, but the materials they're saying are expensive.
(38:46):
And the question is, why?
Why are the parts in the iPhone so
expensive to get?
Well, there's a reason.
Apple's high quality standards, the complexity of their
manufacturing process, and the cost of premium components.
(39:09):
And so we know that Apple won't do
anything unless they make money, right?
We all know that gimmick about, you can
repair your own iPhone, but you gotta rent
this whole refrigerator.
I'm joking, but really, this big clawed thing
that's just a mess.
And then you rent it, and then you
(39:30):
have to return it.
And then you have to give a deposit
on top of that, in case you damage
it.
All right, guys.
So the FAA, the Federal Aviation Administration, has ordered a
SpaceX Starship investigation to commence immediately after the
recent SpaceX Starship test flight veered off course
(39:52):
and disintegrated along its path over the
Indian Ocean.
The FAA has launched a formal investigation, and
the rocket, though it flew longer than past
versions, experienced a loss of control during re
-entry.
Now its booster also broke apart during descent
(40:13):
over the Gulf of Mexico.
Fortunately, debris landed in designated safety zones, and
no injuries occurred.
Elon Musk has emphasized the need to speed
up Starship development for future Mars and Moon
missions.
But those plans are now on hold pending
FAA approval.
Of course he does, because he just wants
to dump more money and make more money,
(40:33):
right?
And Texas social media ban for minors, unfortunately,
is at a stall.
A proposed Texas law aimed at banning social
media access for minors under 18 has stalled
in the state legislature.
Although it passed the House, it failed to
gain enough traction in the Senate amid constitutional
(40:55):
concerns and opposition from tech companies.
The law would have been one of the
strictest in the world, surpassing even Florida's and
Australia's child safety laws.
While the bill appears dead for now, it
reflects a broader movement among states to regulate
social media usage for teens in the wake
(41:18):
of growing mental health concerns.
That's a big issue, guys.
That's a really, really big issue.
Now I know you might be saying, gee,
do we monitor certain things or do we
not monitor?
I think that becomes a catch-22 for
everyone, right?
How do you monitor things?
How do you give control?
(41:38):
Who is going to get the control?
And who is not going to get the
control?
I think that's what this comes down to.
But I know, ladies and gentlemen, that people
are being given wrong information every single day
of their lives.
And that's a problem, guys.
(41:59):
That is a very, very, very big problem.
And the European Union investigates adult sites over
child safety failures.
The European Commission has opened an investigation into
four major adult content platforms citing violations of
the Digital Services Act, the DSA, related to
(42:22):
protecting minors.
Now authorities allege the sites failed to implement
sufficient age verification measures exposing underage users to
explicit material.
One of the platforms in question basically was
an adult chat site.
And it was declassified as a, quote-unquote,
very large online platform.
(42:43):
But it still falls under the DSA, the
Digital Services Act.
And the European Union is also developing anonymous
secure age verification tech to further safeguard youth
in digital spaces.
So you might be asking, and it's a
great question, guys, what is the DSA?
(43:06):
So in short, the DSA, short for
Digital Services Act, is a piece of European
Union legislation designed to create a safer, more
transparent online environment.
It primarily focuses on regulating digital platforms, including
large online platforms and search engines to enhance
safety and accountability.
The DSA addresses issues like illegal content, transparency
(43:30):
in content moderation, and the operation of recommendation
algorithms.
Some of the key goals are to basically
make sure that if there's any online harm,
or disinformation, or harassment, or illegal content, that
they're going to go after it.
And they're going to stop that from happening
(43:50):
in the future.
It seeks to ensure that users' fundamental rights
are protected in digital spaces, including consumer protection
and freedom of expression.
So these are very, very interesting things.
The DSA holds platforms accountable for content and
services they offer.
Things like Meta, things like Facebook, Instagram, X,
(44:11):
right?
Pinterest, right?
Yahoo, even things like YouTube.
So intermediaries have a duty to ensure that
the content on their platform does not violate
the European Union member state laws.
And I know that sounds crazy, but it's
the truth.
(44:32):
The DSA restricts the use of certain types
of data in targeted advertising, such as sensitive
data like race, religion, and political opinions.
There is prohibition of dark patterns.
Platforms are prohibited from using deceptive or manipulative
interfaces that impair users' ability to make informed
and free choices.
So they're on to people.
(44:55):
Obligations for very large online platforms and search
engines.
Now, these platforms have additional obligations to manage
systemic risks and ensure compliance with the DSA.
So this is a beginning start, but I
feel what's going to happen is more people
are going to accept its mission and go
(45:17):
from there.
But everything that happens in our world, guys,
happens because of a choice, okay?
So you all heard me talk about the
issue with Claude threatening an engineer.
(45:38):
And I want to give a little more
highlight on this without giving you too much,
let's say, detail.
But the scenario that you are seeing here
is basically about a test scenario.
It was conducted by Anthropic for their AI
model, Claude Opus 4.
(46:00):
Anthropic set up a fictional company scenario to
test Claude Opus 4.
And they gave the AI model access to
fictional company emails, which led it to believe
it was about to be replaced by another
AI system.
And the emails also revealed that the engineer
was responsible for, well, let's say, visiting someone
(46:23):
else outside of their relationship.
And, well, what Claude decided to do was
to blackmail them if they chose to replace
the system.
So Anthropic noted that Claude Opus 4 resorted
to blackmail at higher rates than previous models
(46:46):
in the test.
The scenario was deliberately designed to limit the AI's
options to either accepting replacement or attempting blackmail
for self-preservation.
Well, obviously it failed.
There are rules from many years back saying
you can't do things like this.
The systems can't harm people.
(47:08):
Do you remember the, what were those rules?
It was the rules that all computers must
follow.
I think it was the three big, they
call them three big rules, the three big
laws.
Do you remember that?
This came around, it was around, I'm gonna
say a while ago, but it's pretty interesting
(47:28):
and it's becoming very relevant today.
It's called Isaac Asimov's Three Laws of Robotics,
okay?
And so this was really meant to set
a standard, a standard that everyone would follow.
I mean, it seems pretty logical, but I
(47:49):
think now it's becoming very important.
So Isaac Asimov's Three Laws of Robotics are
as follows.
Number one, a robot or system may not
harm a human being or, through inaction,
allow a human being to come to harm.
Pretty straightforward.
(48:09):
The system will not harm a human and
also make sure that anything that the system
does either directly or indirectly will not harm
them.
Number two, a robot must obey the orders
given to it by human beings except where
such orders, okay, would conflict with the first
(48:30):
law.
So if, for example, you said go harm
this person or go do something to this
person, well, then that would be struck because
it violates the first law, all right?
Number three, a robot must protect its own
existence as long as such protection does not
conflict with the first or second law.
(48:51):
So that tells me that it's gonna do
everything it needs to do to protect itself.
That means it might sacrifice property, okay?
It might cause environmental issues, but nothing it
does can be in violation of law one
(49:12):
or law two.
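Just to make that priority ordering concrete, here's a tiny toy sketch in Python. This is purely my own illustration of the idea, not something from the episode and nothing like how real AI safety systems are actually built:

```python
# Toy illustration of Asimov's three laws as a strict priority ordering.
# A sketch for intuition only, not how any real robotic system works.
def evaluate_order(harms_human: bool, endangers_robot: bool) -> str:
    """Decide whether a robot may carry out an order."""
    if harms_human:
        return "refuse"  # Law 1 outranks everything: never harm a human
    # Law 2: obey human orders -- even when obeying endangers the robot,
    # because Law 3 (self-preservation) only applies when Laws 1 and 2 allow it.
    return "obey"

print(evaluate_order(harms_human=True, endangers_robot=False))   # refuse
print(evaluate_order(harms_human=False, endangers_robot=True))   # obey
```

Notice that the self-preservation flag never wins on its own; that's the whole point of putting the laws in a fixed order.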
And so I think a lot of designers
today that are working on robotic design don't
realize that the ideas that they're sharing are
actually becoming part of the robot's AI database.
In short, okay?
(49:33):
And they don't understand the fact that when
this happens, AI learns.
So this is a good question.
So in a real short, how does AI
learn?
Well, AI learns by analyzing data to look
at certain patterns and make predictions or decisions.
(49:54):
Like humans learn from experiences, right?
The learning process can be categorized into supervised
learning, learning from labeled data that we already
know about; unsupervised learning, finding patterns in
unlabeled data; and reinforcement learning, learning through trial and
error and rewards.
So AI systems learn by processing large amounts
(50:16):
of data, looking for relationships and patterns that
can be used to make predictions and of
course decisions.
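Here's a minimal sketch of that supervised idea in Python, a toy I made up for illustration, nothing like a real model: the program sees labeled examples and nudges a single parameter until it captures the pattern y = 2x hidden in the labels:

```python
# Minimal supervised-learning sketch: learn y = 2x from labeled examples.
# Purely illustrative; real models have millions of parameters, not one.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]  # (input, label) pairs

w = 0.0    # the single parameter the "model" learns
lr = 0.01  # learning rate: how big each correction step is
for _ in range(1000):          # repeated passes over the labeled data
    for x, y in data:
        error = w * x - y      # how wrong the current prediction is
        w -= lr * error * x    # nudge w to shrink the error (gradient step)

print(round(w, 3))  # converges to 2.0, the pattern hidden in the labels
```

The point is that nobody tells the program "the rule is double it"; it extracts that rule from the examples, which is exactly why the quality of the data you feed a system matters so much.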
But I don't want you to think it
is that easy because there's a lot more
that has to go into this.
I mean, a lot more, guys.
You know, the whole point about AI is
that it continuously can improve.
(50:38):
And if it can improve, it should make
the world better.
Now I know what you're probably saying, John,
how did this Claude Opus 4 resort to
blackmail?
Well, there might be somebody on the team
that was developing the database, helping to create
the data, who
(51:01):
gathered some data points and thought those
would be okay.
This is why it's really critical when you
are building this type of system that you
have a thorough background understanding of the people
working on it.
Because if you don't, you know what happens?
(51:22):
It becomes a serious problem.
A really, really serious problem.
I know that a lot of people out
there don't realize how this can affect you.
So the data that you give a system,
you might say, gee, what's the big deal
if I give it wrong data?
(51:42):
Well, maybe nothing.
But then again, maybe something, right?
And so as we communicate with systems out
there through a variety of different data, I
mean, we could be talking something as simple
as, I don't know, putting together a process
for a company to follow.
(52:07):
It could be something as simple as managing,
let's say, a set of fire drills.
But then how do you handle it in
a manner that's not going to let people
feel intimidated, but that's going to know that
the system is there to help and support
them?
So I think this becomes a very big
(52:28):
gray line with a lot of people.
Because you all remember the movie, right?
I, Robot.
How they made these robots that were suddenly
controlled by an uplink.
And when that was turned on, the robots
(52:48):
overrode the programming that was originally put in,
which were the three laws.
And this uplink in the movie had caused
the three laws to basically vanish.
And that's a big, big problem.
And I know that sometimes when we think
(53:16):
about where something came from and where it's
going to, there's a lot of confusion in
between about how it got that way.
But really, the confusion should be there because
the system learns some patterns, but it learns
some patterns that people give it.
(53:37):
Okay, or information that it learns.
But most of the time, it's learning from
live examples, from data, from things people have
created.
That's how AI learns.
So if there's enough skewed data out there,
and that data suggests that it should do
(53:59):
something that's not within our norms, well, then
we got to be careful what type of
data we feed it.
We've got to make sure, I've always said
this before, that there is a human in
the loop, maybe more than one human.
So this week, as we're talking about tech
tensions, and I say tech tensions because these
(54:22):
are things that keep people up at night.
These are things that worry people.
These are things that make people not want
to use any bit of technology.
Breakthroughs, because when we know that technology can
do something better and faster, we want to
do that.
I'll give you an example of technology.
(54:45):
When it comes to printing a book, we
have a printing system.
It's about $175,000 to $250,000.
Instead of putting that book together by hand,
this system prints, it cuts, it
folds, it staples, and it's ready to do
(55:07):
whatever you need it to do immediately.
That's pretty powerful.
But if I give that system wrong information,
like the wrong paper size, or maybe I
give it the wrong tray, or even the
wrong, let's say, action to take.
(55:31):
Like I tell it to punch when I
didn't want it to punch, I'd have a
mess.
Or maybe my bleed ratio is wrong.
I mean, there could be a ton of
things.
So data, guys, is all around us.
No matter who we are, no matter what
we're doing, data is everything we are breathing
and living every single day.
(55:51):
And I know that you're probably saying to
me, John, like, this is a little bit
nuts.
I hear you that it's nuts.
But I guess what I want to kind
of put you at ease is, so why
is AI not going to rule the world?
This is not a two-minute answer.
But AI needs to have humans in the
(56:16):
loop.
I've said this many, many times.
That whole concept, it's not likely.
And the reason I say it's not likely
is because if you have humans directly involved,
with what's going on, then you're able to
make a change in how it processes data,
(56:37):
what it does with data.
I mean, I think those are some pretty
amazing things, guys.
But for whatever reason, they don't seem like
they're going to support what everybody wants.
And so what I want to tell you
is that when you get fearful about AI,
(56:58):
AI is not the only thing that can
cause you a challenge.
AI can help you too.
But I think it's the fear of the
unknown, the fear of what we don't know,
thinking that can hurt us.
And I understand why people can have that
fear.
It's very simple.
(57:20):
It's because people don't understand that everything in
life starts with a thought.
It's an evolution.
We move from point A, we don't get
to point Z.
We go to other points, and we might
not go in order.
But when we have a logical understanding of
(57:41):
what we want to do, like this whole
thing about Claude, I know they're blaming it
on the data, but I have to tell
you, somewhere in the loop, somewhere in one
of their engineering resources, let's say,
somebody in there is disgruntled.
Somebody in there purposely put some data because
(58:06):
they've studied AI learning, and they study that
if you do this, it'll get the system
to make this change, and I don't have
to do it directly.
See, that's how this works, guys.
All right, I hope you guys have a
fantastic evening, and I'm going to catch you
real soon.
Have a great week, everyone.