Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:02):
Hi everyone, I'm John C. Morley, the host of
The JMOR Tech Talk Show and Inspirations for Your Life.
(00:45):
Hey guys, good afternoon or good evening, it
is John C. Morley here, Serial Entrepreneur. Great to be with you on The JMOR Tech Talk Show.
So today is April 25th, 2025 that we're
putting this out there.
It's great to be with everyone.
We have an amazing show here for you,
(01:07):
Hackers, Lawsuits and Half Marathons, Tech Never Sleeps.
And we're on series four guys, show number
17.
So definitely check that out.
Be sure to visit BelieveMeAchieve.com, of course,
for more of my amazing, inspiring creations, which
you can do 24 hours a day.
Not during the show, obviously, but definitely check
(01:29):
that out later on.
I'd really love for you to check that out.
All right, guys, let's just get started, shall
we?
All right.
So if you're thirsty or hungry or anything,
well, feel free to go to the kitchen
and get yourself something yummy, whether it's something
cold, something hot.
Hopefully you want something cold right now, probably
because it is getting hot.
Maybe a snack.
It could be something healthy, something sweet, something
(01:50):
tart or not, whatever.
Go ahead and get that and hurry on
back.
So again, everyone, welcome to the show.
It is so great to have everybody with
me here today.
The digital world just got a little wilder.
In this jam-packed episode of The JMOR Tech Talk Show, I'm going to be
(02:11):
tackling some of the boldest moves and the
oddest twists in tech today.
From robots running marathons in China to hackers
giving Seattle a Bezos-themed makeover, this week's
updates are as thrilling as they are eye-opening.
I'll break down how Google, Meta and NVIDIA
(02:32):
are facing some legal and financial heat while
governments experiment with AI and battle over cybersecurity
laws.
Whether it's a Zoom outage shaking up businesses
or NASA getting closer to the sun than
ever, we've got you covered this week with
insights, analysis, and a touch of that JMOR flavor that you always seem to like.
(02:53):
All right, guys.
So our subtitle for today is Code Chaos
and Cosmic Close Calls.
Tech's wildest week yet, if I do say
so myself.
Well, number one, throwing out the first pitch from the mound here: Google appeals the antitrust ad tech ruling
(03:13):
that got imposed on them.
Google is officially challenging a major European Union
antitrust ruling that accused it of unfair dominance
in digital advertising.
The case, which stems from claims that Google
leveraged its control over the ad tech stack
to favor its own services, could have
(03:36):
far-reaching implications and consequences for how online ads operate now.
If the appeal fails, Google could be forced
to divest parts of its advertising business or
drastically change how it handles transactions.
For advertisers and publishers, this could mean a
more level playing field or a more chaotic
(03:56):
one, depending on how regulation unfolds.
Google maintains that its systems provide value and
efficiency, but critics argue they suppress competition.
The outcome could set a precedent for global
tech regulation, especially in the US and the
UK, where similar concerns are rising.
(04:18):
This isn't just about fines.
It's about the future structure of the digital
ad economy.
And I have to tell you, I've given
you my bad story about Google one time.
Our company decided to use Google because we
figured, hey, you know, they know more than
everybody.
Right.
But what we were very shocked to learn
is that when we hired them, they were
dictating to us, telling us we had to
spend more money.
(04:39):
And we said, OK, fine.
You know, this is what we have to
agree to spend.
So we did, as long as it brought us some traction, because you don't actually spend it until they show the ads for it.
So we were meeting with this one person
and the guy was just terrible.
He literally was rude.
He was obnoxious.
And then after we agreed to his terms,
(04:59):
he connected us with somebody else who was
even more rude and obnoxious.
And to tell you how bad they were,
they were showing ads to the wrong town.
We're in Franklin Lakes.
They put Franklin, New Jersey, close, but not
quite that close.
So the other thing that really dismayed me
is that you weren't really working with a
(05:21):
Google employee.
You're working with somebody from Accenture or other companies; many people didn't know they were actually subcontractors for Google. And they really just didn't care.
It was just a mess.
And when you called to complain, you would get some kind of automated system telling you that no one was available to check your account.
When you finally did get somebody that got
(05:41):
back to you, they would tell you, oh,
we just have to spend more money.
I mean, any yo-yo can do that.
So I was really disappointed with them.
And I will never advertise with Google again.
And I won't ever recommend that any of the clients in our ad agency use them.
There are just many other great solutions, until things change, if they do.
Zoom recovers from a global outage recently.
(06:04):
So Zoom, the ubiquitous video conferencing platform everyone
knows, faced a major outage that disrupted services
across the globe.
Businesses, schools, and individuals were suddenly left without
their go-to communication tool, highlighting our deep
dependence on digital platforms for daily operations.
Zoom quickly acknowledged the issue and rolled out
(06:24):
fixes.
But users expressed frustration on social media, emphasizing the platform's role as essential infrastructure, something that needs to be monitored more closely.
In the wake of this outage, discussions were
growing around backup systems and alternative tools to
reduce vulnerability to single-platform failure.
(06:45):
Zoom has promised more transparency and updates moving
forward.
But for many, the incident has already planted
seeds of doubt.
Could this lead to a diversification of video
communication services?
Or will Zoom bounce back stronger with more
robust fail-safes?
I don't know, guys.
One thing I think that's very interesting is,
(07:07):
so the school that I go to for
my master's and then it'll be my PhD
at Montclair State University, well, Montclair State University
experienced a Zoom outage.
And so this was a big problem, you
know, just last week.
(07:28):
And so when we think about it, everyone
thought that, you know, that it was something
with the university.
It was something with Montclair State University.
But what we found out was that it
wasn't actually Montclair State University.
It was Zoom, because they call it Red Hawk Zoom, but really the Red Hawk conferencing services use Zoom's technology.
(07:50):
And so this is an interesting thing.
But when this caused a complete disruption to asynchronous classes, which are classes that are taken remotely, that became a problem not only for the administration, but for many students, right?
(08:11):
Many, many students.
Well, we'll just have to see what's going
on, and we'll have to keep an eye,
guys, because the question I have is, you
know, will Zoom do anything to fix this?
I think they might, okay?
We were getting issues basically around April 16 with the domain not resolving at all.
(08:32):
It looked like a name server issue; the domain accidentally got taken down, though not by an attack. Zoom's outage report blames a communication error between GoDaddy Registry and Markmonitor. Markmonitor describes itself as an ICANN-accredited registrar.
And from what I have seen, companies are
basically shelling out top dollars to keep valuable
(08:55):
domains safe.
The whole point of paying Markmonitor rates is protecting domains from exactly this kind of meltdown.
But I think a lot of this is
just a bunch of nonsense for money.
And these companies are not giving the services
that they should.
And I got to say, to use GoDaddy,
I'm not a big fan of GoDaddy, but
(09:15):
to use GoDaddy to manage publicly traded companies' domains, like, they just don't seem like they're, I don't know, able to do that.
I know one time we used them in
the past, and it was a complete nightmare.
Like, you got people that just didn't even
understand what we're talking about.
So we'll have to see what's going on
with that, guys, and we will definitely give
you feedback.
We will definitely give you feedback.
(09:36):
Well, robots complete a half marathon in China.
In an extraordinary display of robotics and endurance,
a group of humanoid robots completed a half
marathon in China recently.
This groundbreaking event wasn't a sci-fi stunt.
It was a real test of robotic coordination,
balance, and durability over long distances.
(09:59):
Now, the robots were built by various companies and universities.
They navigated the course autonomously, handling terrain changes
and minor obstacles without any type of human
intervention or control.
The feat signals massive progress in AI (artificial intelligence), mobility, and physical design, potentially opening
(10:21):
the door for robots to take on more real-world applications in search and rescue, healthcare, and beyond.
Engineers behind the project said it's less about
speed and more about consistency and control.
Spectators were amazed not just by the technical
accomplishment, but by the vision of a future
(10:42):
where robots run alongside us literally.
This could be a glimpse into how physical
AI evolves beyond factory floors.
This is kind of interesting if I do
say so myself, but I think in one
breath, I think it's a big issue because
(11:04):
we have to make sure there are certain fail-safes in place.
I mean, you've heard me say this before,
right?
We have to make sure there are certain fail-safes in place.
If they're not in place, then that's a
problem and we'll just have to see what's
going on.
So we're definitely going to keep our eyes
peeled with this, guys, and we'll let you
know what's happening.
If it's something that's going to help us
or hurt us, you'll be the first to
(11:25):
hear it right here on The JMOR Tech Talk Show.
And NVIDIA was hit by a US chip export ban recently, at significant cost.
So NVIDIA, one of the world's leading chip
makers, has taken a serious financial hit following
a US government ban on chip exports to
certain countries.
The export restrictions, primarily aimed at limiting China's
(11:47):
access to high-end semiconductor technology, have cost
NVIDIA a sizable chunk of international business.
Now, the company warned investors of the blow, citing lost sales in both the AI and the gaming sectors, where its chips are in high demand currently.
Beyond the dollars, this move underscores a deepening
(12:08):
tech cold war between the US and China.
Now, NVIDIA is, of course, currently reevaluating its production and distribution strategies to stay compliant while minimizing revenue losses.
Experts say this could delay development in cutting
edge AI systems abroad and reshape the competitive
(12:30):
landscape for chip design globally.
It's a textbook example of politics disrupting innovation
pipelines.
And I think we're going to see a
lot more about that.
Speaking about pipelines, it's actually a topic that
we're studying this week in my advanced MIPS
(12:50):
class.
And my final for it is actually coming
up this Monday, by the way, guys, which
is April 28th.
It's actually my last day of class.
And I thought I would bring this up
because I think it's kind of interesting.
What is a pipeline in MIPS? So again, MIPS is an assembly language (and processor architecture).
(13:11):
And so a pipeline is a technique used to improve instruction throughput by overlapping the execution of multiple instructions. Think of it like an assembly line in a factory: while one instruction is being executed, the next one can be decoded and another can be fetched, all at the same time.
(13:31):
Now the MIPS pipeline has five classic stages. We have instruction fetch, instruction decode/register fetch, execute/address calculation, memory access, and write back.
And so why do you want to use
pipelining?
Well, it increases performance.
Multiple instructions are in progress at once.
What I want to explain to you, though, is that it doesn't make a single task finish
(13:54):
any quicker; the time one instruction takes, its latency, doesn't change. But how long we have to wait overall changes, because we don't have to wait for one thing to be completely done before starting the next.
So we think of it like a laundromat.
And we have a, let's say we have
a washer, we have a dryer, we have
a folding area, right?
And then we have a place
(14:16):
to put things in storage.
Now, each one of those, let's say, took
30 minutes.
Without pipelining, okay, each stage is still going to take 30 minutes. So 30 times four is 120 minutes for one load. However, we can stagger the starts, so that while load one moves to the dryer in cycle two, load two can start washing.
(14:39):
So that's the handy thing.
We can stagger how that works, assuming we
have more machines and things like that.
But that can help us.
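To put some numbers on that laundromat analogy, here's a minimal Python sketch. The 30-minute stage time and the four stages come from the example above; the four-load count is my own assumption for illustration:

```python
# Laundromat pipelining: 4 stages (wash, dry, fold, store), 30 min each.
STAGE_MINUTES = 30
NUM_STAGES = 4

def total_minutes(loads, pipelined):
    """Total time to push `loads` loads through all stages."""
    if pipelined:
        # First load occupies all stages; each later load finishes
        # one stage-time after the load ahead of it.
        return (NUM_STAGES + loads - 1) * STAGE_MINUTES
    # No overlap: each load uses all four stages by itself.
    return loads * NUM_STAGES * STAGE_MINUTES

print(total_minutes(1, pipelined=False))  # 120, the 30 x 4 from the show
print(total_minutes(4, pipelined=False))  # 480 without staggering
print(total_minutes(4, pipelined=True))   # 210 with staggered starts
```

Notice that one load still takes 120 minutes either way; pipelining only pays off when there are multiple loads, or instructions, in flight.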
It also basically introduces some challenges.
We call them hazards.
I won't get into them too deeply.
One is a data hazard, which is any time there's an instruction that has a dependency on something else, like a resource such as a
(15:01):
register, memory, et cetera. There are also control hazards, which come from branches. And then there are structural hazards. So structural hazards are more about resource conflicts, while data hazards are about instruction dependencies.
So, like with a register: let's say I write a register and then I need to read that register, but it's not finished writing; I can't read it.
A structural hazard is when the resource is
(15:24):
already in use, such as the ALU (arithmetic logic unit), memory, et cetera.
Now, the whole point is we want to
prevent stalling, but I'm not going to get
into that really deeply.
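As a rough illustration of that data (read-after-write) hazard, here's a small Python sketch; the instruction tuples and register names are hypothetical stand-ins for MIPS instructions, not a real pipeline model:

```python
# Each instruction is modeled as (destination register, source registers).
def raw_hazard(earlier, later):
    """True if `later` reads a register that `earlier` hasn't finished writing."""
    dest, _ = earlier
    _, sources = later
    return dest in sources

add_i = ("$t0", ["$t1", "$t2"])  # add $t0, $t1, $t2 -- writes $t0
sub_i = ("$t3", ["$t0", "$t4"])  # sub $t3, $t0, $t4 -- reads $t0

# sub needs $t0 before add has written it back: a RAW hazard,
# which a real pipeline resolves by stalling or forwarding.
print(raw_hazard(add_i, sub_i))  # True
```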
The reason I want to share that with
you is that pipelining is changing where we're
going.
It definitely is changing where we're going.
And when we think about pipelining and how it works, it's a very robust,
(15:45):
factory-like kind of system. It'll give us more throughput and hopefully allow us to mitigate more of the challenges that come up, because a lot of people don't understand what these challenges are.
And if we're building a pipeline environment, whether
it's in a production world or whether it's
in the factories that connect with politics, we've
got to understand how that works, right?
(16:06):
All right.
And jumping on to number five.
Well, the UK council uses AI for housing
plans.
Check this out.
A UK local government has begun experimenting with
artificial intelligence to develop new housing plans that
better align with community needs.
By analyzing data on current housing trends, environmental
(16:26):
impacts, and social factors, AI could generate urban designs and resource allocations that are more sustainable and equitable.
This marks a significant milestone toward integrating AI into public planning, which could lead to smarter, data-driven cities.
While AI can't replace human planners entirely, its
(16:50):
role as a powerful tool for complex decision
making is becoming clearer.
Experts caution that such reliance on AI carries high risks, and it requires careful oversight to avoid bias and ensure fairness.
Still, the success of this project could serve
as a model for cities worldwide looking to
(17:12):
modernize their infrastructure and address housing crises.
I think that's something that a lot of
people might be fearful of, and I think
the reason they're fearful of it is because
what if the decision made is wrong?
I always say you have to keep humans
in the loop, right?
You've got to keep humans in the loop.
If you don't keep humans in the loop,
(17:33):
guys, it can be a very, very big,
big problem, okay?
I think that's a big, big problem.
And if you don't get it, it's just
(17:53):
going to weaken the infrastructure we're in. I think that is a huge problem for a lot of people, I mean, a lot of people. You don't tend to understand things like these different hazards unless you understand what they can do and how we can mitigate them, okay? How we can mitigate them.
(18:16):
I think that's very, very important, guys, very,
very important.
All right, moving on to our next point.
This is the US warning Zambia over a new cyber law.
So the US government has raised alarms about
(18:36):
Zambia's new cybersecurity law, warning that it could
lead to major internet freedom violations.
The law, which would grant the government sweeping powers to monitor online activity and punish those deemed to be spreading false information,
has drawn criticism from human rights groups.
The US has urged Zambia to rethink its
approach, arguing that such measures could stifle freedom
of expression and innovation.
While Zambia insists that the law is necessary for national security and combating cybercrime, the controversy highlights the ongoing global
(19:21):
tension between securing cyberspace and maintaining civil liberties.
This issue isn't unique to Zambia, as many
countries around the world struggle with how to
balance digital freedom and national security.
Big problem.
Could this mark the beginning of a broader
international set of debates on internet governance?
(19:43):
It could, guys.
It really, really could.
But you know, I think everyone's just kind
of panicking right now, and they're hoping that
this is going to be the solution.
It may not be the solution, and if
it isn't, well, that's fine.
But I think we have to be cognizant
of the fact that these different technologies out
(20:04):
there, I mean, they do exist, right?
I think that's a very, very big problem for everyone.
And so if we understand that, then maybe
it'll be something that people would understand a
little bit better.
But I think we have to do our
due diligence before we can start, you know,
(20:24):
willy-nilly just jumping in and saying, okay,
we want to just do this.
Because I think technology has the power to strip people of their freedoms.
I won't get into all this today, but
I think this is a serious problem for
a lot of people.
All right.
Number seven: Discord, the social media platform, tests
(20:46):
facial scans for age checks. Discord, the popular online communication platform, has begun testing facial recognition technology as a new way to verify users' ages before they enter certain channels.
The move, which aims to protect
(21:07):
younger users from inappropriate content, has raised significant privacy concerns.
While Discord insists the data collected will not be stored and will only be used for real-time verification, critics are wary of the potential risks.
The implementation of AI-powered facial scans could
be a slippery slope toward greater surveillance, leading
(21:30):
some to question whether the benefits outweigh the
privacy trade-offs.
Supporters are starting to argue that this technology
could create safer online spaces, especially for minors.
Regardless of the side you're on, it's clear
that facial recognition is becoming an increasingly common
tool in the tech world, raising important ethical
(21:51):
questions about privacy and security.
Let's take the airport for a moment, okay?
So let's just take the airport.
So what does, I think it's called ID.me, do with your facial scan when they read it at the airport?
Let's talk a little bit about that.
So it allows passengers to securely and safely,
(22:15):
they claim, share their identity information through facial
recognition.
Now TSA says they do not copy or store the digital ID unless it is done in a limited testing environment to evaluate the agency's facial recognition technology. They say that's the only way they do
(22:35):
it.
So how does it work?
It's called the TSA PreCheck Touchless ID, and
it uses facial recognition technology to match your
face to the photo on your ID.
If everything checks out, then you may not
need to hand your ID to a TSA
officer at all.
But it's not just for TSA PreCheck,
(22:56):
it works for any kind of check.
What does TSA do with your photo?
Well, the photo is optional.
Your photo and personal data are deleted after
your identity is verified.
Images are not used for law enforcement or surveillance, and are not shared with other entities.
You can tell the officer if you do
not want your photo taken.
And if you don't, they simply just have
(23:16):
you stand to the side, they ask for
your license, and they make sure your face
matches the license.
That's pretty much how it works.
I mean, there's not too much to it.
But I still see there being a problem,
okay?
The problem I see is that there could
(23:37):
be an exploit, okay?
They claim it's for faster, shorter lines and
things like that, more secure to prevent fake
IDs.
They say they're not used for law enforcement
or surveillance.
The system's not connected to any police or
immigration database.
No photos are used outside the checkpoint process.
But many people, including lawmakers, aren't convinced, and
(24:00):
I'll be honest, I'm not convinced either.
I think the data is being stored somewhere.
And so, you know, senators from both parties are now demanding more oversight. They want the Department of Homeland Security to prove that facial recognition is more effective than existing systems, show it does not create bias or unfair errors, protect the public's personal data
(24:23):
and privacy, and provide clear ways for travelers to opt out.
Are there risks or errors?
Like all technology, facial recognition isn't perfect.
You know, we've all used our finger to
sign on a computer, and sometimes if your
finger's not correct, it doesn't read it.
I know I have one for my USB
drives.
And if my finger's not exactly right on
that, well, it's not going to let me
(24:44):
get into my secure drive.
So studies have also shown that facial recognition
can be less accurate for people with darker
skin, women and older adults.
And this has raised more equity questions about
who is most likely to be wrongly flagged,
right?
There are kids, right?
(25:05):
And digital IDs.
So right now, children under 18 are not
photographed using facial recognition.
So the traditional ID checks are still in
place for minors.
TSA is also starting to test digital driver's
licenses that are stored on smartphones at some
checkpoints.
During test periods, TSA may collect and analyze
some traveler data.
They have said this data is anonymized, encrypted
(25:28):
and deleted within two years.
I don't believe that.
What's the big picture?
Facial recognition at airports is growing quickly.
TSA plans to expand the technology to more
than 430 airports across the US.
That means it could become the default way
we verify identity at the airport, but you
can always opt out.
But even as the system spreads, questions remain.
(25:48):
Is it really about safety or is it
about surveillance?
Will opting out stay easy or will pressure
to comply increase?
Can TSA guarantee fairness and privacy as the
system grows?
And so many of you don't know, I've been doing this for many years. I opt out of the, they call it the, basically the checkpoint, you know, when
(26:10):
you go through the scanners. I refuse to do that. I'll go through a manual pat-down.
And that's your right.
There are a lot of rights, but they don't publish these rights.
And I think as a traveler, you've got
to know what these rights are.
I think that's a huge problem for a lot of people.
And I think that just gets into the
way our country works and democracy and, you
(26:32):
know, and things like that.
But what does it really mean?
I think it comes down to the fact
that people are getting manipulated, unfortunately.
It's because of different bills that are getting
passed and there is some political bias.
(26:54):
Let's not kid ourselves here.
There's definitely some political bias to how these
things are working.
The fact that it's misflagging certain people, well,
that's a real issue, right?
So, we're going to have to definitely keep
an eye on that, because some strong concerns and questions are popping up about privacy and
(27:14):
security.
And number eight, guys, a Tesla whistleblower lawsuit
does move forward.
You might say, John, what the heck is
this all about?
Well, this is a very interesting story if
I do say so myself.
Basically, a lawsuit from a former Tesla employee
has gained traction alleging that the company failed
(27:38):
to address multiple safety violations at its factory.
Okay.
The whistleblower claims that Tesla disregarded worker safety standards and created an unsafe environment for its employees, leading to potential risks of injuries and long-term health consequences.
(28:00):
The case is particularly significant because Tesla, okay, I think this is important to understand, has faced scrutiny in the past for workplace safety, especially concerning its rapid production targets.
(28:22):
And with this being, you know, the way it is, it's going to put them under even more of a magnifying glass. Tesla maintains that it upholds the highest safety standards, all right? That's what they claim, okay?
The legal battle could potentially spark larger conversations
about worker protections in high-tech factories with
(28:45):
growing public interest in labor rights and corporate
responsibility.
This lawsuit could have far-reaching effects on
the tech industry's treatment of employees.
Could this be the catalyst for new regulations
in the manufacturing sector?
Might be, might be.
So I think we're going to have to
be, I use the word cognizant, we're going
to have to be aware of like, you
(29:06):
know, what's going on because the big thing
everybody wants to do is they all want
to get people comfortable, right?
With a certain thing.
But then what we're finding is that they're
twisting the truth about, right?
They're twisting the truth.
And if they twist the truth, that's a
huge, huge problem, all right?
(29:30):
We'll just say, we have to see what's going on and why people are doing things a certain way, and let people know that, you know, we're kind of onto them.
We know what's going on and we're not
going to be taken down a path, right?
Or brainwashed into believing that this is the
(29:51):
right way when it really may not be.
And ladies and gentlemen, Meta is slammed for poor hacked account support. Meta, Facebook, we all know Meta; Facebook changed its name to Meta a while back. They thought they were hiding something by doing that, but they're really not.
Meta is facing backlash over its handling, or as
(30:13):
I said, poor handling, of hacked accounts on Facebook and Instagram.
Users have reported struggling to regain control of
their accounts after they were hacked with some
claiming that meta's support system is slow, unhelpful,
or entirely unresponsive.
Meta users are frustrated by the lack of direct assistance and the overwhelming reliance on automated
(30:37):
systems. As Meta continues to expand its user base, ensuring account security is becoming more critical than ever. The issue has prompted questions about how well tech giants are prepared to protect users from emerging threats.
Critics are arguing that meta needs to invest
more in its customer service infrastructure to maintain
(30:59):
user trust.
If the company doesn't act quickly, it risks
losing credibility in the increasingly competitive social media
landscape.
I mean, first of all, you can't call
Facebook on the phone.
They do have a chat system, but let
me just tell you this about the chat
system.
That chat system is basically for advertisers only.
It's not really for anything else.
(31:21):
If you think it is, well, let's just
say you're highly mistaken.
You're highly mistaken.
And I think that could be a problem.
I think that could be a big, big problem.
If people understood that these changes that are
(31:44):
being made, or the lack thereof, from Meta had some other type of, let's say, positive attribute... but there is no positive attribute. Meta is just not giving support. I mean, Meta has just become very unresponsive.
(32:05):
Meta basically has no customer support.
Not that TikTok does either.
So only Meta Verified subscribers can access Meta Verified support.
So what does that mean?
They want you to pay to be able
(32:27):
to get support.
That sounds like a real moronic way to
do that.
It's very interesting.
The small claims court became Meta's customer service hotline. Not too long ago, one man boarded a plane from New Jersey to California to appear in court.
(32:47):
He found himself engaged in a legal dispute against one of the largest corporations in the world, and the venue for this David versus Goliath showdown would be San Mateo's small claims court. Over the course of eight months and an estimated
(33:08):
$700, mostly in travel expenses, he was able to claw back what all other methods had failed to render: his personal Facebook account.
Those may be extraordinary lengths to regain a digital profile with no relation to its owner's livelihood, but this person is one of a growing number of frustrated users of Meta services who were unable to get help from
(33:30):
an actual human through normal channels of recourse. Using the court system instead was his way to solve it.
And in many cases, it is working.
The thing is, they just don't have a
proper channel.
That's probably the best way to put it.
They do not have a proper channel to
(33:53):
handle this, guys.
I mean, I think that's really probably the nutshell, you know, right there. They just don't have a proper channel.
And I just see this getting worse and
worse.
Many small businesses say meta is failing to
help them recover hacked Facebook and Instagram accounts,
(34:13):
leaving them also frustrated and even traumatized by
the lack of support because this is supposed
to be part of their livelihood.
Wedding dress designer Catherine Dean recently described the experience as, quote unquote, devastating, taking four months and a personal connection at Meta to resolve. One cybersecurity firm recently reported handling 10 to 15
(34:36):
such cases weekly, while scammers increasingly use AI and fake Meta branding to trick users into handing over credentials. Some firms even lose access without being hacked, often flagged by Meta's systems in error.
Despite growing complaints, meta has offered limited transparency
and continues urging users to enable stronger security
(34:58):
measures.
So one thing I would tell you guys
is use two factor authentication.
I know it might seem like a pain,
but use it.
If you do, you've got a very high
chance your account is going to stay safe.
And, you know, there are so many, maybe on Messenger. Have you ever, you know, gotten those messages from Facebook that say
(35:21):
your page is in violation, but it's a scam? Have you ever gotten those?
Yeah.
So what happens?
This is a pretty big scam and it
says your page is about to be basically
taken down.
OK, here's why it's so dangerous.
(35:42):
The scam isn't just a hassle; it also poses a significant threat to business owners for a few reasons. First, fear and urgency.
We all know how hard it can be
to build a following on Facebook or any
other social media.
So the idea of losing years of hard work with the loss of your page is a little scary.
Scammers prey on this fear, causing page owners
to act quickly without thoroughly verifying the authenticity
(36:03):
of the message.
Right.
So that's important.
Professional impersonation.
Unlike some scams, which can be easier to detect, in this case messages often appear to come from Meta, using official language and graphics to deceive users.
Loss of control.
If you click on the link, provide the
message.
You may unknowingly give the scammer access to
(36:24):
your business page, putting your content audience and
financial information at risk.
So always verify the source.
Avoid clicking links.
Educate your team, report suspicious activity and enable,
enable, enable two factor authentication.
It's really, really easy to do.
Just go back to the top type security
and you can go right in there and
enable two factor authentication.
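For the technically curious, here's roughly how those rotating six-digit two-factor codes get generated behind the scenes. This is an illustrative Python sketch of the standard TOTP algorithm (RFC 6238), not the code of any particular app, and the function name and defaults are my own:

```python
import hashlib
import hmac
import struct
import time


def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """Generate a time-based one-time password (RFC 6238 sketch)."""
    if for_time is None:
        for_time = time.time()
    counter = int(for_time) // step                 # which 30-second window we're in
    msg = struct.pack(">Q", counter)                # counter as 8-byte big-endian int
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # "dynamic truncation" per the RFC
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)   # keep leading zeros


# RFC 6238's published test secret; at T=59 seconds the 6-digit code is 287082
print(totp(b"12345678901234567890", for_time=59))  # → 287082
```

The point isn't that you'd ever write this yourself; it's that the code changes every 30 seconds and is derived from a secret only you and the service share, which is exactly why a stolen password alone isn't enough.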
(36:46):
The rise of the new Facebook scam underscores
the importance of staying informed and aware.
Scammers are becoming increasingly sophisticated, but with the
right knowledge and precautions, you can protect your
business page.
Remember, Meta will never contact you through Messenger
for matters related to policy violations or page
issues.
By avoiding suspicious links and reporting questionable activity,
(37:09):
you can keep your Facebook page safe and
secure.
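That "check the actual domain" habit can even be sketched in a few lines of code. This is a hedged illustration, not a complete defense; the allow-list and function name here are just examples I made up for the sketch:

```python
from urllib.parse import urlparse

# Example allow-list only; the real set depends on which services you use.
TRUSTED_DOMAINS = {"facebook.com", "meta.com"}


def looks_official(url: str) -> bool:
    """Return True only if the link's real hostname is a trusted domain
    or a subdomain of one. Scammers often bury a familiar name inside a
    longer, unrelated hostname, which this check catches."""
    host = (urlparse(url).hostname or "").lower()
    return host in TRUSTED_DOMAINS or any(
        host.endswith("." + d) for d in TRUSTED_DOMAINS
    )


print(looks_official("https://www.facebook.com/settings"))          # → True
print(looks_official("https://facebook.com.page-alerts.example/"))  # → False
```

Notice the second link: it shows "facebook.com" right up front, but the domain that actually answers is `page-alerts.example`. That's the trick your eyes have to do manually before you click.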
But I want to warn you of something
else.
One time I was reporting content because I
thought it was helpful to report it.
What I found out was that when you
report things too often, well, get this, guys,
Meta's system will actually flag you because
you reported things too often. Something seems
(37:30):
messed up about that.
Right.
So I caution you about reporting all those
things.
I just go and block the
people instead.
Right.
And sometimes I tell them things
like, you know, this is crazy.
You know, I know who you think you're
fooling.
And, I mean, it happens on LinkedIn,
but not as often.
(37:50):
LinkedIn basically doesn't allow that.
Now, LinkedIn is actually
owned by Microsoft.
I don't know if you guys know that,
but Facebook shouldn't allow links to go through
if the person doesn't know the sender.
They just shouldn't allow those links through.
That's my personal feeling.
(38:11):
All right.
So moving on with more things.
Number 10, NASA's probe flies closest to the
sun.
Yeah, this is a really,
really cool thing.
And I think it's something that a lot
of people are really happy is happening.
So NASA has achieved a monumental milestone as
its Parker Solar Probe flies closer to the
(38:32):
sun than any spacecraft in history.
The probe's journey, which began in 2018, is
providing scientists with unprecedented data on the sun's
atmosphere, solar winds and magnetic fields.
Now, this mission could ultimately help scientists better
understand space weather and its impact on the
Earth. As the probe approaches the sun,
(38:54):
it's enduring extreme temperatures and radiation, a feat
that's only possible thanks to its cutting-edge
heat shield. With each orbit that it makes,
Parker gathers valuable information that could revolutionize, guys,
our understanding of the sun's behavior.
Guys, I think that's
(39:16):
like the bee's knees. I've got to say,
that is so awesome that, you know, they're
able to do something like that.
Right.
And I think a lot of people that
hear this go, wow, that's really cool
that they actually
(39:38):
have that.
Right.
But the goal is not to, you know,
do anything direct.
They just really want to gather data.
That's really the biggest thing that they're
trying to do
right now.
So this mission also serves as a testament
to humanity's ability to push the boundaries of
(39:58):
exploration and technology.
So I think that's something that we've really
got to be very, very proud of.
I mean, our space program, I
think it's really amazing,
like all the stuff that we're doing
with it.
So we'll definitely have to keep you in
the loop on that.
And ladies and gentlemen, yes, Google faces another
(40:20):
fine: a six point six billion dollar UK ad lawsuit.
Google, like, what are you guys doing?
You get in trouble so much now.
I think more people are starting to realize
that Google can be sued,
and so they do go after them.
Google's facing a massive five point four billion
pound, or six point six billion dollar, lawsuit
(40:43):
in the United Kingdom, accusing it of monopolizing
digital advertising and misleading users, yes,
about how it collects and uses their data.
The lawsuit, filed by a group of advertisers
and publishers, claims that Google's control over both
the inventory and the auction process creates an
unfair advantage. The case could have a profound
(41:06):
impact on the digital advertising landscape
if it succeeds, potentially forcing Google to
overhaul its advertising business.
Now, this comes amid ongoing regulatory pressure on
big tech companies worldwide and the outcome of
the lawsuit could either basically reinforce Google's dominance
(41:27):
or set a new precedent for regulating tech
giants.
So that's definitely a big one.
And here's one that I think is pretty interesting:
a protester disrupts a Microsoft AI event.
This is one, guys, where
you can't make this stuff up.
I mean, imagine being
at an event and your event gets interrupted.
(41:47):
Like, that's just crazy.
So a protester recently interrupted a Microsoft artificial
intelligence event claiming that the company's artificial intelligence
systems were being used to exploit workers.
The protester's actions have sparked discussions about the
ethical implications of AI, particularly in the context
of labor rights.
While AI is transforming industries, some argue
(42:09):
that it also raises the risk of increased
automation, leading to job displacement and income inequality.
Microsoft, which has made significant investments in AI
development, has faced pressure to ensure that its
technology is used responsibly and ethically.
The disruption highlights growing public concerns about how
(42:32):
big tech is leveraging AI and whether it's
in the best interest of society as a
whole.
As AI continues to evolve, expect more protests
and debates on its ethical uses.
And I think, again, like I always tell
you: AI, it's
not good, it's not bad, it's a tool.
(42:53):
And how we choose to use it, ladies
and gentlemen, that makes it so.
And so it's our own choice how we
choose to use this particular tool, right?
I think that's something that a lot of
people don't quite understand, but they really need
to get on board and start understanding that
because that's going to be what makes or
breaks our universe.
(43:15):
And number 13, Mr. Elon Musk and DOGE, yes,
building a migrant database.
So, Elon Musk's involvement with, basically, DOGE.
And if you're wondering, everybody always says to
me, hey, John, and this
is a good question,
what does DOGE stand for?
(43:37):
Everybody asks what DOGE
stands for with Elon Musk.
So DOGE is, basically,
this new system that he's putting in place.
DOGE is an acronym
(44:00):
for a government agency, and
it stands for the Department of Government Efficiency.
Has a real fancy name, doesn't it?
Yeah, sure does.
So now that we know what it
stands for: Elon Musk's involvement with DOGE is
continuing to stir up more attention, but now
(44:22):
it's branching into an unexpected project.
Musk has recently announced that DOGE is being
used to build a database of migrant workers,
helping them access essential services and benefits.
This unexpected use of cryptocurrency for, you guessed
it, social good has raised eyebrows, with
(44:43):
critics questioning the motivation behind Musk's move.
Some view it as a clever public
relations stunt, while others believe it could open
up new opportunities for migrants to improve their
financial and social standing.
As the project unfolds, it may create a
model for future tech-driven solutions to address
global migration challenges, and the success of the
(45:06):
initiative could redefine how digital currencies and blockchain
are integrated into humanitarian projects.
So I think that's an
important thing, but I think
the biggest thing is, you know, what's actually
going on, right?
I think that's the biggest thing.
(45:26):
Does that make sense?
Hopefully it does, and maybe you can
understand, you know, a little bit more
about what's going on.
I hope.
(45:53):
Right.
And maybe that'll make sense for you.
I hope it does, but it's got people
concerned, right?
It's got people concerned about
what's going to happen and
what's not going to happen.
And, you know, we'll go from
(46:15):
there, because I think this is
a big challenge that's going to
happen for a lot of people.
Okay.
I think that's important.
Does that make sense to everyone?
I hope.
(46:38):
I hope it does, but I think
a lot of you are concerned about the
political culture.
Does that make sense to you?
(47:00):
Maybe, maybe not.
Right.
But if it does make sense, does that
mean we're going to go a certain way
or not?
(47:27):
I think this is a big issue for
a lot of people.
Okay.
And the big issue for
a lot of people here
(47:48):
is they've got to understand what that is
and why that is.
Well, our last story today is a very
interesting one.
Seattle crosswalks are hacked with a Bezos AI voice.
What the blank is this all about,
right?
This is something that I think is so
amazing that I feel most people won't
(48:12):
get it.
They won't get it.
And the reason they won't get it is
because they're just going to be driven.
And I think they're going to be
driven by what's going on.
And what's going on is this: people
(48:33):
are abusing AI.
So, Seattle crosswalks were hacked with a Bezos
AI voice.
So Seattle residents were left startled to say
the least when they discovered that the city's
crosswalk signals were being hacked with a voice
mimicking Amazon's Jeff Bezos.
The prank, which used artificial intelligence to generate
Bezos' voice, caused confusion and raised concerns
(48:54):
about the vulnerability of public infrastructure to cyber
attacks.
The hack was clearly a reminder of how
AI is being used, not just for innovation,
but also for mischief.
Authorities are investigating the incident, that's probably
the best way to say this, and experts warn that such
vulnerabilities could become more common as smart cities
(49:15):
rely on AI systems for day-to-day
operation.
Could this be a wake-up call for
cities to rethink how they secure their
technology?
I think it really can.
But as I said to you guys before,
AI is a tool.
It's how we choose to use it.
That makes it so.
Does that make sense to everybody?
(49:38):
Hopefully it does.
And maybe you'll be able to understand that
just a little bit better.
Right.
And by understanding it a little better, I
think you might get a sense for, like,
you know, what it's about, right?
And what it's about is
(50:00):
something pretty cool:
the fact that we have
the ability to make these changes in our
life.
Right.
I mean, literally, we can make these changes
like immediately.
And I think that's a
big thing for a lot of people.
(50:20):
They don't realize that we have this ability
right in our own world.
Okay.
That's something that I think
is scaring a lot of people.
Why?
Because the decisions we make, okay, are impacting
not only the policies that are being created,
(50:43):
okay,
but also shaping the culture.
It's shaping things like, you know,
how we went from office
work to remote work, right?
We had people wanting to work from
home; now they've kind of ended
that.
So I think it's
a big problem and I see it only
getting worse.
(51:05):
It's going to be a problem, guys,
a problem for people,
but I don't want you to be scared
(51:25):
of it.
I want you to understand that you can
make some big changes in your life.
And these changes are very, very
powerful.
All right.
Very, very powerful.
But it starts from where we are now;
where we are now is where it
(51:47):
comes from.
And that's a pretty big deal.
And I think that's going to make the
difference as to whether something is going to work
or not work.
Okay.
I mean, I think that's the big thing.
And I think if you understand that,
(52:08):
then you probably understand the whole concept of
AI.
Artificial intelligence was designed to help our world.
Okay.
Okay.
Artificial intelligence was never designed to be,
how can I say, something
people should fear.
But unfortunately, a lot of people in our
world have been using it as a crutch.
(52:31):
People have been using it as an excuse to
not do work.
Right.
I don't have a problem with people using
it to, let's say, do
research, but it shouldn't replace our work.
Right.
That's important.
But if you don't understand that, then how
(52:51):
do you even begin to think?
How do you begin to think?
I think the thinking has to come from
a very interesting spot, an interesting place.
And that's something that I think a lot
of people don't understand.
(53:13):
It requires a different mindset.
Does that make sense,
everybody?
(53:40):
I think if you understand that mindset, then
maybe, just maybe, you'll be able to
make some very positive changes in your life.
But the mindset for AI is something that I
believe a lot of people just shut
(54:01):
down about.
And you know why they shut down about it?
Because they don't know.
And they don't want to know.
They just make that choice.
So in this week's episode of the Jay
Moore Tech Talk Show, I dove very deep
into the major waves shaking up the global
(54:22):
tech scene around the world.
Google is pushing back against the antitrust ruling that
targets its dominance in the ad tech world,
while a UK class-action lawsuit threatens
the company with a 6.6 billion dollar
price tag, or five point four
billion pounds in the UK.
At the same time, the company is under
scrutiny for its role in digital advertising practices.
(54:44):
Meanwhile, Discord is testing facial recognition
scans to verify user ages, stirring up conversations
around privacy and youth protection online. Over in
the AI world, a protester stole
the show at Microsoft's latest AI event, drawing
attention to the growing ethical concerns surrounding artificial
(55:07):
intelligence.
I mean, these are serious things, guys, really,
really serious things.
And, you know, we also covered the international
developments that can't be ignored, from Zambia,
which was warned by the U.S. over
new cyber legislation that could harm digital freedoms,
(55:29):
to a UK council implementing AI to design
housing plans.
Governments are actively redefining the tech landscape every
single day. Even in China, guys, innovation took
center stage as robots completed a half marathon,
showcasing incredible strides in robotics.
Tesla is once again in the hot seat,
in a legal battle over a
(55:51):
whistleblower lawsuit that's moving forward now because
the judge says it's okay to move ahead.
And Meta is under fire once again for
failing to properly support users whose accounts were
hacked, yet another strike against the tech giant's
user protection policies, and then saying that
users need to pay to get verified
before they can get support. Really bad.
(56:12):
And finally, we touched on some headline-making
moments that blur the line between bizarre
and brilliant.
NASA's Parker Solar Probe has made its closest
-ever approach to the sun, helping
humanity better understand our solar system.
Elon Musk, always in the spotlight, is now
rumored to be using a DOGE-labeled entity
(56:32):
to build a controversial, yes, a very
controversial migrant database, a development that's
clouded in speculation. To top it
off,
Seattle's crosswalks were recently hacked to play Jeff
Bezos' voice using AI, turning a quiet city
moment into a surreal tech prank.
(56:55):
And so the thing about this is that
it wasn't just messages that would simply
alert somebody.
These were offensive messages that were being
loaded into the hacked system.
So I see that as a big,
big problem.
You know, I mean, I think that's a
(57:15):
big problem for
everybody.
What do you guys feel?
Um, what do you guys feel?
I hope you guys can appreciate where we're
going with tech.
And I hope this makes some sense
(57:37):
for you.
All right.
I really do.
And so I think if we
can understand this, then maybe we can definitely
do some pretty cool stuff.
All right.
And I think that's a pretty cool
thing for everyone to understand: that we can
(57:59):
do some amazing things, but we first have
to realize, ladies and gentlemen, that it comes down
to who we are and the decisions we
make.
All right, guys, I'm John C.
Morley, serial entrepreneur.
It's always a privilege, pleasure and honor to
be with you guys on these amazing days,
weekends, and evenings.
Do check out believemeachieve.com for more of
my amazing, inspiring creations.
I'll catch you real soon, everyone.