Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:06):
Hi everyone, I'm John C. Morley, serial entrepreneur, the host of
The JMOR Tech Talk Show and Inspirations for
Your Life.
(00:50):
Well hey guys, it is John C. Morley here,
serial entrepreneur.
You're tuned into The JMOR Tech Talk Show.
A big welcome to everyone that is joining
us for the very first time, and if
you are not joining us for the first
time, well, welcome back.
It's always great to have you here on
The JMOR Tech Talk Show.
So our show today falls on the
(01:14):
first day of August, so that's probably a
big deal for a lot of people.
And I think you're really
going to like
today's topic, because it connects with a lot
of things that we talk about in life
every day in regards to technology.
And that topic, ladies and gentlemen, is this,
(01:37):
Power Moves and Digital Shocks: The Week Tech Got Real,
series four, show 31.
Welcome everyone.
If you have not checked out BelieveMeAchieve.com,
please do that after the show.
It's available 24 hours a day, where you
can find my latest short form and long
form content, as well as articles and all
(01:57):
kinds of other stuff.
So definitely check that out.
All right, well, if you are thirsty, guys,
I know I have my RO water here.
Feel free to go get yourself some water,
maybe a snack, a sandwich or some chips
or fruit or whatever you would like.
And I will go ahead and get this
(02:18):
show started, because I think the most important
thing when we think about the show is
making sure you're comfortable and that you're going
to definitely learn a lot of information.
So I am John C.
Morley, serial entrepreneur, podcast host, coach, graduate student,
video producer, member of the press, engineer, and
(02:39):
so much more.
And my question is, are you ready to
dive into the week's biggest tech shakeups?
I know I am.
From Elon Musk shutting down Starlink in
Ukraine to $2.3 million in funding for
ocean AI sensors and Intel's bold restructuring act,
all on this episode, we're going to cover
it.
So sit back, everyone, and get ready to
(03:01):
be amazed.
Plus, Meta's political ad ban in the European
Union, Waymo's self-driving green light, privacy
breaches, Tesla protests, cutting-edge digital defense moves
by the Chicago Sky, and more.
So don't miss the insider scoop on the
biggest battle shaping tech's future.
Stream it now, watch it later, share with
(03:22):
your friends.
You definitely will enjoy it.
The JMOR Tech Talk show features fresh
insights, usually within 24 hours of the show
airing.
Tune in at thejmortechtalkshow.podbean.com.
And you can also visit believemeachieve.com for
exclusive content as well.
All right, guys.
(03:43):
So again, so much going on, right?
Mr. Musk, what he's doing here, this is
really, I'm going to say, a little bit
of a challenge for a lot of people,
okay?
So on the show, we unpack the biggest
(04:04):
tech news innovations every week, fast, fresh, no
fluff.
If you want the real scoop on the
moves, shaking the digital world from groundbreaking AI
to political drama in tech, this is your
go-to podcast.
Tune in for the insights that keep you
ahead of the curve.
New episodes drop within 24 hours of airing,
so don't miss them.
All right, so Mr. Musk pulled the plug,
(04:27):
well, literally, not just figuratively, on Starlink's operations
during Ukraine's counteroffensive.
In a controversial move, Elon Musk ordered Starlink
satellite service to shut down just as Ukraine
launched a key counteroffensive, sparking worldwide backlash
and raising, guys, big questions about the power
(04:51):
billionaires hold over modern warfare.
And that can be scary to a lot
of people, I'm not going to lie.
That's why it's important that everyone understands that
we need to do things for the greater
good of all concerned, not what's just going
to be best for someone's pocket, all right?
And in the stunning move that he pulled,
(05:12):
he reportedly ordered the shutdown of the Starlink
satellite service, as I said, as Ukraine had
launched a key counter offensive against the Russian
forces, a decision that may have shifted the
course of battle from space.
The tech titan's call has ignited global backlash,
raising questions about the power of
private billionaires in modern warfare and the national
(05:33):
security, as I said.
But was this a strategic misstep or something
more?
The world is watching, and I think right
now, Elon has a lot going on, which
we'll get to, but I think this is
a big thing for most people.
When you can basically have control of a
whole empire or city or country at the
(05:55):
palm of your hand, that's pretty powerful.
And I think regardless of what level of
a leader you are or how much money
you have, you have a responsibility to act
for the greater good of all concerned.
Now, of course, you want to be profitable,
but you want to do things that are
not going to harm or hurt someone.
I've known so many people that would do
things that are going to make them money,
(06:17):
but unfortunately, it's going to cause a lot
of other people pain.
And I think that's a very, very big
problem.
Our second story on the blotter here for
today: Spear AI raises $2.3
million to revolutionize ocean monitoring.
Navy veterans behind Spear AI secured $2.3 million
(06:38):
in funding to deploy AI-powered sensors that
listen beneath the waves, identifying everything from whales
to potential underwater threats in real time.
Catch that.
The future of ocean intelligence, well, it's not
around the corner.
It's actually here today.
So again, it was founded by US
(07:00):
Navy vets.
Spear AI just locked in $2.3
million in its first funding round.
They're going to be getting a lot more
to bring artificial intelligence to the ocean's edge,
literally.
Their mission is to decode underwater acoustic data
to tell the difference between whales, storms, and
stealth threats.
Again, all in real time.
With a $6 million US Navy contract already
(07:21):
secured and sensors that can turn any buoy
into an AI-powered ear, Spear is making
waves in defense and deep sea intelligence.
Think of it as the Palantir of the
deep, with sonar smarts and serious venture firepower.
That's going to be pretty interesting, guys.
(07:41):
And although I think it's got a lot
of potential, I am concerned about where the
data goes and if somebody is going to
exploit it.
That's always a concern, right?
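For listeners curious what "turning a buoy into an AI-powered ear" might look like under the hood, here is a purely illustrative sketch in Python. It is not Spear AI's actual system; the sample rate, the frequency bands, and the three hand-picked "profiles" are made-up assumptions just to show the general idea of labeling underwater sound by its energy signature.

```python
import numpy as np

# Purely illustrative: label one-second hydrophone clips by comparing how their
# energy is split across a few frequency bands against hand-picked "profiles".
# The sample rate, band edges, and profiles are made-up assumptions, not Spear AI's.

SAMPLE_RATE = 16_000                                   # assumed hydrophone sample rate (Hz)
BANDS = [(10, 200), (200, 1_000), (1_000, 4_000)]      # low / mid / high frequency bands

PROFILES = {                                           # hypothetical band-energy signatures
    "whale call":    np.array([0.70, 0.25, 0.05]),
    "surface storm": np.array([0.30, 0.40, 0.30]),
    "propeller":     np.array([0.10, 0.30, 0.60]),
}

def band_energies(clip):
    """Fraction of the clip's spectral energy that falls in each band."""
    spectrum = np.abs(np.fft.rfft(clip)) ** 2
    freqs = np.fft.rfftfreq(len(clip), d=1.0 / SAMPLE_RATE)
    energies = np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in BANDS])
    return energies / (energies.sum() + 1e-12)

def classify(clip):
    """Return the profile whose signature is closest to the clip's band energies."""
    feats = band_energies(clip)
    return min(PROFILES, key=lambda name: np.linalg.norm(feats - PROFILES[name]))

if __name__ == "__main__":
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
    fake_whale = np.sin(2 * np.pi * 80 * t)            # low-frequency tone as a stand-in clip
    print(classify(fake_whale))                        # prints "whale call"
```

A real system would run trained models on far richer features, but the basic flow, capture audio, extract features, compare against known signatures, is the same general idea.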
And Intel, we haven't talked much about Intel.
Intel spins off a $5.8 billion networking
unit in a CEO shakeup.
The new Intel CEO, Lip-Bu Tan, is streamlining
(08:03):
the company by spinning
off its networking business, aiming to cut costs
and sharpen focus on core chip technologies to
compete in the fierce AI-driven market.
In fact, if you guys hang around a
little later tomorrow, I'm actually going to be
unboxing my brand new Xtreme Lenovo laptop PC
(08:23):
that you're definitely not going to want to
miss.
And we'll probably feature it on next week's
show a little bit to give you some
highlights.
So Meta pulls political ads in the European
Union over new laws.
So Meta is stepping back from political issues
and basically revoking ads in Europe due to
the stringent new transparency laws.
(08:46):
And this is a bold retreat highlighting the
growing tension between big tech and regulators demanding
ad accountability.
And again, everybody's doing things for money.
But if it's not doing things to help
people, then I think that could be a
serious, I mean, really, really serious problem, guys.
Really, really serious.
(09:06):
So definitely have to keep an eye on
that.
So as I said, that's what's going on with
Meta, pulling the political ads in the European
Union over the new rules.
This starts October 2025.
And Meta will stop all political, election, and
social issue advertising across the European Union platforms,
(09:27):
blaming the bloc's new Transparency and Targeting of
Political Advertising law for creating significant operational challenges
and legal uncertainties.
Wow.
This is really going to, I think, put
a hole in Meta's pocket, literally.
But I guess they're tired of all the
legal battling and fighting.
This bold retreat echoes Google's earlier move and
(09:50):
highlights big tech's growing pushback against tough European
Union regulations aimed at fighting disinformation and boosting
ad transparency, which is something we have a
big problem with in the United States.
Meta warns the ban will hurt voter access
to critical information, sparking debate over the balance
between regulation and free speech in the digital
age.
(10:11):
Now, my question, ladies and gentlemen, is, so
if they're going to do this in the
European Union, will they eventually do it in
the United States?
You know, that's a great question.
I don't have the answer to it.
But I think the European Union is really
on the ball with this.
And by being on the ball, I think
that's something that a lot of people don't
(10:33):
get.
And I think hopefully you'll be able to,
let's say, understand this and maybe make the
right decisions not just for you, but also for
the world at large.
Because when we do that, things change.
(10:53):
It's all about the fact that people do
things for money every single day.
And I know that when we think about
this whole step in this law, people don't
want to get sued.
I mean, that's just the bottom line.
(11:13):
And so everybody has been kind of like
sidestepping, like, you know, what do we do?
What do we not do?
And it's a real problem.
Our next point on the list today is
Waymo.
Waymo's self-driving crash probe has officially closed.
After a 14-month saga, the federal safety
authorities ended their investigation into minor crashes involving
(11:37):
Waymo's autonomous vehicles, signaling confidence in the future
of driverless technology.
So it's interesting.
So with the U.S. shutting down the
Waymo self-driving collision probe after the 14
-month investigation into 22 reports of minor crashes
and unexpected behavior, the National Highway
(12:00):
Traffic Safety Administration, NHTSA, has officially closed its
probe into Waymo's self-driving vehicles without needing
any further action.
So citing Waymo's proactive recalls and software updates
that improve safety and obstacle detection, the agency
signaled confidence in the technology as Waymo's rolling
(12:21):
out over 1,500 autonomous cars across major
U.S. cities, serving 250,000 fully driverless
rides weekly.
So this marks a major win for autonomous
tech and a green light for the future
of driverless mobility.
The question you might be asking, and I
(12:42):
think it's a very good question to ask,
are we ready?
I mean, I think this is a great
question.
Are we ready for autonomous vehicles?
And I think the answer to that is
a mixed bag.
Is the technology there?
Whether we're ready for autonomous vehicles is a
(13:02):
complex question.
It's an ongoing debate.
If something makes a mistake, who do we
go after?
Do we go after the car manufacturer?
The software company?
The people that built the
roads?
I was having a conversation with somebody the
other day, and they're like, yeah, we go
after everybody.
So now with the AI world, it just
adds another party, a person, organization,
(13:23):
or company, that they can hold liable.
And although that sounds interesting, I think it
becomes a problem because most people don't understand
what this means.
Waymo is acting responsibly, and
(13:46):
I've got to give them kudos for
that.
And I think we have to understand what
this really means to us and to our
life and to our world.
And maybe if we understand these things, then
maybe we'll be able to understand just a
(14:09):
little bit better about what's going on in
our world, right?
It's not something that just happens one day.
This is a continual compounding evolution.
Think of it like this, guys.
You put money in the bank, right?
And then that money, because you want to
put it in the best bank so you
get the best interest or CD or investment,
whatever it is, you want to get money
(14:30):
back, right?
You want to make more money by just
having it sit there.
But we know that a lot of times
when you invest money, if there's a low
level of risk, you don't always make something
unless you've already built something, like if you're
a property owner and you're renting property.
But there is always some risk, guys.
There's risk in everything that we do.
(14:51):
And so right now, if we were to
look at the numbers, what is, let's say,
the average interest in a personal, let's say
a personal checking account or savings account?
And I think what we'll find out is
that it's not as high as we would
(15:11):
want it to be.
Savings accounts now are basically yielding about 0.38% APY, annual percentage yield.
However, high-yield savings accounts can offer significantly
higher rates, sometimes exceeding 4% APY.
Checking accounts average around 0.07%, with some that can
exceed 4%.
(15:32):
In general, savings accounts are designed for accumulating
funds and tend to offer higher interest rates
than checking accounts.
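To make those percentages concrete, here's a quick back-of-the-envelope comparison. The $10,000 deposit is just an illustrative figure, and the APY numbers are the rough averages mentioned above, not a quote from any bank.

```python
# Rough, illustrative comparison of one year of interest at the APYs mentioned above.
# The $10,000 deposit is just an example figure, not advice.
deposit = 10_000.00

rates = [
    ("average savings (0.38% APY)", 0.0038),
    ("average checking (0.07% APY)", 0.0007),
    ("high-yield savings (4.00% APY)", 0.0400),
]

for label, apy in rates:
    # APY already accounts for compounding, so one year of growth is simply deposit * APY.
    earned = deposit * apy
    print(f"{label}: about ${earned:,.2f} earned on ${deposit:,.0f}")

# average savings (0.38% APY): about $38.00 earned on $10,000
# average checking (0.07% APY): about $7.00 earned on $10,000
# high-yield savings (4.00% APY): about $400.00 earned on $10,000
```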
So I think understanding this and knowing that
there always are risks, right?
And so if we understand what these risks
are and we mitigate them, see, that's how
we grow.
(15:53):
That is how we grow.
And Microsoft is coming back in the picture.
Microsoft investigates Chinese hackers exploiting SharePoint.
Microsoft is probing whether state-linked Chinese hackers
took advantage of leaked cybersecurity data to exploit
a critical flaw in SharePoint, raising serious concerns
(16:14):
about global cyber espionage risks.
Now, this is a real interesting conundrum, right?
Microsoft is racing to find out if a
leak from its early warning cybersecurity alert system
helped Chinese hacking groups exploit a critical flaw
in its SharePoint software, sparking a global espionage
blitz.
(16:35):
Despite issuing patches, the vulnerability was actively used
by state-linked groups, dubbed Linen Typhoon and
Violet Typhoon, and raising serious questions about how
trusted security programs might inadvertently aid hackers with
tens of thousands of organizations at risk worldwide.
This investigation could shake the foundations of cyber
(16:55):
defense trust and fuel a fresh debate on
transparency versus security in our tech world.
Ouch.
That's a big problem.
So the question I bet you probably want
to ask, and it's a really good one.
Did you know all government, let's say, agencies
use Microsoft email?
(17:15):
Yeah.
So while Microsoft is a key technology partner
for the US government, and many federal agencies
utilize Microsoft's suite of services, including email through
Office 365, we can't really say that all
government agencies use Microsoft email, but a lot
of them do.
A lot of them do.
(17:36):
And so the fact that major agencies of
the government are using it, okay?
I'm not talking about the small ones.
I'm talking about the big ones.
But there is, let's say, a very
high percentage of government agencies using
(17:58):
Microsoft products.
I think that would be a very fair
statement to make.
In some research studies, it's been found that
80% of government employees use Microsoft productivity
software.
This reliance has raised concerns about security vulnerabilities
and the potential for cyber attacks.
(18:20):
Now, the thing is this, if the governments
are using this technology, right?
85% of them right now.
Could be a little bit higher.
That was as of the last time they took the
poll.
If they are having so many problems with
security, and the agencies that are handling very
(18:40):
sensitive information, like the FBI and CIA, are
doing that type of stuff, right?
That could be a huge problem for the
citizens at large, right?
Think about people that might be in the
witness protection program, right?
They put all that information in a database.
(19:01):
What if somebody hacked that database?
That would be a real problem.
But you see, the government really doesn't go
looking for things.
They just take whatever's there, and it's probably
going to be inexpensive, and that's what they
do.
That's just how the government works.
(19:24):
So definitely a big issue.
Like I said, with tens of thousands of
organizations at risk, it's shaking up the foundations
of cyber defense and the level of trust
that it has.
Microsoft has never been, I could say, the
be-all, end-all with all kinds of
security.
Even back in the days with Novell NetWare,
(19:45):
which I used to love, unfortunately, they just
didn't have the money or the market share
that Microsoft did.
Novell did everything right, and Microsoft has tried
to copy a lot of their stuff.
In fact, when you used Novell NetWare to
actually create policies for directories, I mean, they're
the ones that did containers first.
(20:06):
And so when you do that kind of
stuff, you can do it very easily in
Novell, because you can just assign it anywhere.
In Microsoft land, well, you have to make
sure it does inherit.
There's a lot of other pains that you
have to deal with.
But NetWare was much more secure than Microsoft.
In fact, Microsoft doesn't even use their own
(20:27):
security to protect their own infrastructure.
That tells me something right there, guys, right
there.
Women's dating app, Tea, suffers a major data
leak.
This anonymous women's dating app, Tea, experienced a
serious breach with 72,000 private user images
(20:51):
exposed.
This incident shines a harsh light on privacy
challenges in apps designed for vulnerable communities.
And I think everybody wants their own app.
For me, I think we need to get
away from the apps.
I think these apps are just a problem.
(21:13):
The app is called Tea, Tea
for Women is what the app is called.
And they have things like, they call it
dating safety tools that protect women.
So it's an app basically just mostly for
women looking to date.
And so they talk about things like background
(21:34):
checks, catfish image search, sex offender search, phone
number lookup, criminal record search.
Navigating the modern dating scene can be very
daunting, they say.
And they have over 4,647,000 women,
they claim, dedicated to empowering each other and
(21:57):
getting access to a suite of dating safety
tools made just for, as they call
it, quote unquote, the girlies.
And so the platform seems like it's trying
to do everything that it can.
However, they did suffer a major data breach,
as I was explaining to you.
(22:18):
And so they call it spilling the tea
on Tea.
Tea was born from a deeply personal mission
to give women the tools they need to
date safely in a world that often overlooks
their protection.
The founder of the organization is Sean Cook.
He launched Tea after witnessing his mother's terrifying
experience with online dating, not only being catfished,
(22:41):
but unknowingly engaging with men who had criminal
records.
Realizing that traditional dating apps do little to
protect women, and really men too,
Sean knew something had to change.
That's why he built Tea, the first ever
dating safety platform for women.
And again, it's basically a marketing thing, right?
(23:05):
Is Tea a great app?
I can't tell you if it is
or isn't.
But the fact that they had this big
breach, okay, that's a problem.
So you might be saying, what is
Tea doing about the breach?
Well, right now they're being extra vigilant, okay?
(23:30):
They claim they've engaged third-party cybersecurity experts
and internal security teams to reinforce system security,
according to a statement they made to the
USA Today.
Tea states that it has implemented additional security
measures and resolved the data issue.
Don't know if I believe them 100%.
They disabled some messaging.
(23:50):
The direct messaging feature has been temporarily disabled
on the app because there seemed to be
a bug that was causing some breaches from
that.
The affected system containing the breached data has
been taken offline as a precaution.
And Tea is collaborating with law enforcement agencies
investigating the breach.
The breach actually affected users who signed up
(24:10):
before February 2024, exposing tens of thousands of
images, including selfies and photo identifications used for
verification.
There are also reports of leaked direct messages
containing sensitive conversations.
And this is all according to a spokesperson
at Tea who is offering free identity protection
services now to individuals whose personal information was
(24:32):
involved in the breach.
But you know, it's not just dating apps.
It's not just credit card companies.
Everybody is going to get hacked at some
point in time if they're not properly protected.
The way you stay properly protected is you
have to be vigilant.
You have to do audits.
Even ourselves in the industry, we always audit
ourselves all the time.
(24:53):
And I think that's really, really important.
So again, I thought that was very interesting.
I think what they're doing with Tea is
great.
But you know, how do we know that
any of this data is being secured?
And so the thing that I think
should be happening is two-factor.
(25:14):
So does Tea offer two-factor authentication?
Well, I guess in the past, everything was
open.
They now offer two-factor authentication.
But the thing is, you know, a lot
(25:35):
of people don't use it.
So just like we talked about, you know,
Slack.
And you know, I'm a big Slack user.
But we two-factor everything.
And the thing about two-factor, I mean,
look what happened with Disney and these other,
let's say, movie houses and theme parks.
They got in trouble because they didn't enable
two-factor authentication because they were lazy.
(25:56):
Well, I say shame on them.
But I also say shame on Slack.
Slack should have made that required.
Now, the admin can do that.
But who cares, right?
You need to make it from the company
level so that everyone has to secure their
account within, I don't know, 24 hours of
opening it, or maybe immediately.
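If you've ever wondered what's behind those six-digit codes, here is a minimal sketch of time-based one-time passwords (TOTP, the RFC 6238 scheme most authenticator apps use). The demo secret below is a made-up example, and this is an illustration of the general mechanism, not Slack's or any specific vendor's implementation.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, step=30, digits=6, now=None):
    """Generate an RFC 6238 time-based one-time password (HMAC-SHA1, 6 digits)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if now is None else now) // step)
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian time counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify(secret_b32, submitted, step=30, window=1):
    """Accept the current code plus or minus `window` steps to absorb clock drift."""
    now = time.time()
    return any(hmac.compare_digest(totp(secret_b32, step=step, now=now + i * step), submitted)
               for i in range(-window, window + 1))

if __name__ == "__main__":
    demo_secret = "JBSWY3DPEHPK3PXP"   # made-up demo secret, not a real credential
    code = totp(demo_secret)
    print("current code:", code)
    print("verifies:", verify(demo_secret, code))
```

The takeaway: the math is cheap and standard; whether an organization requires it on every account is purely a policy decision.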
So I think these are big problems.
(26:17):
And it's not just for today.
It's for things that are moving forward, like
I said, in the future.
But having that leak with 72,000 private
images exposed, that's a big problem.
It's a buzzy platform known for letting women
review men anonymously and has confirmed a major
security breach with hackers stealing 72,000 user
(26:37):
images, including 13,000 verification selfies and 59
,000 pictures from DMs and posts.
While they claim no emails or phone numbers
were leaked, the violation strikes at the heart
of an app built on safety and anonymity.
And I think that's a big problem.
I think that right now gives them a
kick below the belt that they're not as
secure as they claim to be.
(26:57):
So that kind of puts a hole in
their marketing.
Tea is scrambling right now with security extras,
as I mentioned, to get things back on
track.
But I still tell you that it is
your responsibility when you use any app to
be smart.
Don't share your personal information with somebody you
just met.
Do not do that.
You never should share anything like your address,
(27:19):
your phone number.
Meet somebody for a cup of coffee, maybe
at a local coffee shop for pizza or
something.
Make sure that you get to know who
the person is before you give out your
address.
I think that's a really, really important thing.
And I hope at least this breach that
happened wakes up not just the team at
(27:40):
Tea, but also the other ladies and gentlemen
that are on there.
And we got a question in.
I'm happy to answer it.
And that is, what is catfishing?
So thank you for that, because I know
every once in a while, I use a
term and people are like, what is that?
So catfishing is something.
It's not just for men or women.
Catfishing is when someone creates a fake online
(28:02):
identity, often using someone else's photos and information
to deceive people.
The goal is usually to build relationships or
gain something from the victim, often leading to
emotional harm and financial loss.
There are people out there that just do
this for a living.
So catfishers use stolen or fabricated images and
(28:22):
details to create a profile that doesn't reflect
their true self.
They may use the fake identity to initiate
romantic relationships or friendships, often on the dating
app or on social media.
Deception and manipulation.
The catfisher may mislead the victim about their
identity, background, or even their location to gain
their trust.
(28:42):
Catfishers may try to extract money, personal information,
or other benefits from the victim through the
fabricated relationship that they have.
These also are people that are going to
want to text you every day.
They're going to want to get in some
type of a connection with you, right?
Some people catfish to escape their insecurities and
feel more confident in a fabricated online persona.
(29:04):
Catfishing can be used as a form of
revenge, target individuals who have wronged the catfisher.
Some catfishers aim to scam victims out of
money or even other valuables.
So creating a more desirable online persona can
provide attention and validation.
But take, for example, the Manti Te'o case, where
(29:26):
a football player was catfished by a fake
woman named Lennay Kekua, and a man pretending
to be a retired Navy SEAL or CIA
agent to defraud multiple women on dating sites,
according to the DOJ, and also Proofpoint.
So how do you spot a catfisher?
I think this is a very important thing
that I want to go over for women
(29:46):
as well as men.
Reluctance to video chat or meet in person.
Catfishers often avoid video calls or in-person
meetings.
Inconsistencies in information.
Their stories and details may not add up
or match their online profile.
Unwillingness to share personal details.
They may avoid answering questions about themselves.
Their profile may seem too good to be
(30:08):
true.
And again, it's all about building this type
of trust.
And so when somebody texts you every day,
that could be a really, really big problem.
And so just be leery of people that
try to text you every day, ask you
how you're doing, because there's probably, I'm going
(30:31):
to say, a good, let's say, probability that
they're up to no good.
Now, they're going to take a lot of
time to do this.
Some people will go through the whole nine
yards.
They hear it's your birthday.
They want to come and celebrate you.
They tell you that they want to marry
(30:52):
you.
They tell you all these things.
They don't even know you.
Then they tell you something like they're in
the army.
And this is a really popular one.
They may not be in the army, but
they go get somebody's army gear and they
take a picture.
And so they say, you know what?
I can fly up to you, but I
need money to be able to fly back
home.
Because the army only pays one way or
(31:14):
something like that.
Or I need money to fly out, but
then they only pay to fly back in.
These are all scams, big, big scams.
So I definitely want to, let's say, make
you aware of those situations.
Russia and Iran strengthened ties with new satellite
launch.
Russia launched Iran's Nahid communication satellite into
(31:37):
orbit, marking a significant boost and a big
shift along geopolitical and technological
lines amid growing global tensions.
Who would know that Russia and Iran would
be working together, right?
You never would have known what was happening
(32:01):
there.
So with Russia launching this, the move from
the Vostochny Cosmodrome follows multiple Russia-led
launches for Iran and comes months after both
nations inked a strategic partnership.
With tensions flaring in the Middle East, the
satellite liftoff signals deeper cooperation and a new
(32:24):
chapter in the geopolitical space race.
Wow, guys, that's a lot of things.
Information, a lot of information.
And we're talking about Mr. Elon Musk before.
Well, yeah.
So Tesla's takedown really heats up now.
It's not the Tesla dealerships anymore.
You know where it is?
It's at Musk's diner.
(32:48):
And that's where it's happening.
So what started as the long lines for
burgers served in cyber truck boxes at Elon
Musk's flashy new Tesla diner has now flipped
into a protest hotspot.
Activists behind the Tesla takedown movement stormed the
Hollywood hotspot, accusing Musk of funding fascism and
(33:09):
slamming his anti-trans reputation in a community
that isn't having it.
As Model Y sales slump, protesters say the
real thing being served is resistance, not fries.
With 40 plus protests planned and Tesla stock
in a, let's say, a very indecisive place,
the pressure is heating up very quickly.
(33:30):
And this summer, it's not just the grill
that's going to be sizzling.
It's probably going to be the fate of
their company and Mr. Elon Musk's reputation.
Wow.
Um, and I'm still like, undecided as to,
you know, that little debacle that Elon Musk
(33:51):
had with President Trump.
I don't know if that was real or
not.
I don't know if that was planned or
staged.
I don't know.
I'm still waiting to see, you know, what's
going on with that.
And, uh, very interesting, but we'll keep you
abreast of that.
Okay.
And, um, Musk removes 635,000 harmful accounts
(34:14):
to protect, um, actually it's Meta, not Musk.
I've been stuck on Musk.
Meta removes, excuse me, 635,000 harmful accounts
to protect teens.
Uh, Meta is aggressively cleaning up its platform,
deleting over 600,000 accounts linked to harmful
content targeting minors, while rolling out new teen
safety features powered by, yes, Artificial Intelligence.
(34:37):
Their AI systems are making a lot of
mistakes, but hopefully they're going to get better.
I can't swear by that.
So Meta is stepping up the teen safety
with powerful new features and massive cleanup, as
I said.
And, uh, besides removing the 635,000 accounts
linked to inappropriate behavior targeting kids under 13,
the company has rolled out easy options for
(34:59):
teens to block and report suspicious accounts,
while AI tools detect fake ages to keep
younger users safer by automatically restricting their accounts.
With over a million accounts blocked and reported
by teens,
Meta is pushing hard to protect its younger
community amid rising concerns about mental health
and, yes, online predators.
(35:21):
So what are some of these new features?
Well, I thought you'd never ask.
One thing is one-tap
block and report for suspicious accounts.
There's AI age verification to detect underage users lying
about their age, default private settings for teen
accounts, restricting messages to only trusted contacts for
teens, removal of 635,000 accounts involved in
(35:43):
harmful interactions, and safety notices encouraging teens to be
cautious with private messages.
So this bold move comes as Meta faces
mounting pressures over social media's impact on youth's
mental health and online safety.
We all know what happened several months ago,
um, when Mr. Zuckerberg got up and,
(36:05):
um, he pretty much had to be
almost shamed or coaxed into basically saying
sorry, apologizing, but his apology was so empty.
It was robotic.
It didn't really have feeling or emotion to
it.
And I think people knew that.
Here's what I think is going to take you by
storm.
It's one that I think a lot of
people like are not expecting, but it's something
(36:27):
I think that's definitely different.
Um, Vermont revives free payphones to fight dead
zones in rural Vermont where cell signals often
vanish.
Old payphones are making a comeback, being restored
and, um, enabled for free use,
(36:48):
providing a lifeline when digital connectivity fails.
That's a very, um, interesting thing.
If you ask me, so no bars on
your phone, no problem.
Vermont brings back payphones for free in rural
Vermont.
Um, as I said, where cell phone service
(37:08):
just seems to vanish out of thin air
for miles.
Electrical engineer Patrick Schlott is reviving retro tech
with a modern mission, restoring old payphones and
making them free to use with just an internet
connection and his basement tinkering.
Uh, Schlott has already installed phones at places
like the North Tunbridge General Store and
(37:30):
a local library, giving residents and travelers a
lifeline when the digital bars disappear, from emergency
calls to after-school pickups.
These analog lines are ringing with purpose and
they're just getting started.
So again, notice I said analog analog is
kind of going away.
Um, I think eventually, and I don't know
(37:51):
if he's doing this, he may be taking
an internet connection and then converting it back down to
analog, because that's what these phones were:
analog.
So we're gonna have to see what happens,
but there's definitely a lot of possibilities there.
Chicago Sky teams with AI firm to block
online abuse.
The WNBA's Chicago Sky partnered with
(38:13):
Moonshot to deploy counterterrorism tech that actively
protects players from online hate, stalking and doxing,
a groundbreaking move in sports safety.
So I know what you're probably going to
ask me, and that's a great thing.
Uh, what does WNBA stand
(38:37):
for, and who are the Sky?
So, um, the WNBA is the Women's National Basketball
Association, in case you were wondering.
So that's, what's going on there.
Uh, another interesting word.
I know it's probably going to mean something
to a lot of you or not.
You probably like to know when I use
these words and that's the word doxing.
(38:58):
Um, it seems like an innocent word, but
it's not.
Doxing is the action or process of searching
for and publishing private or identifying information about
a particular individual on the internet, typically with
malicious intent.
So let's say for example, um, you were
trying to maybe remove somebody from a community
board.
Maybe they were, uh, in a condo association
(39:18):
or another HOA.
And, um, your goal was to get them
off the board.
Maybe you wanted to get on the board
because there were only so many spots.
So you did some digging to find out
what the ownership was on a property, found
it.
Maybe they didn't own all of it.
Maybe there was a brother or relative that
owned it.
And so because they didn't own more than
51% of it, you now decide to
take this information, either print it or submit
(39:41):
it to the news or publish it online.
And now guess what?
Well, you've doxxed them.
Now they basically responded by removing that person
from office because one of the bylaws was
they have to own their home or own
their property.
And you just created something for the purposes
of basically hurting them and helping you.
(40:04):
So, um, these are very, very interesting things,
guys.
Very, very interesting things.
Um, as I said, you know, it is
game on and trolls off: the Chicago Sky's
tech-backed social media shield.
The Chicago Sky, um, isn't just dominating on
the court.
(40:24):
They're leading the charge against the online hate,
as I told you before.
And so they're teaming up with Moonshot.
And, uh, because when it comes to player
safety, the sky refuses to play defense.
They're all in on shutting down harassment and
making social media a safer place for everyone.
You know, uh, stalking doesn't just happen in
person guys.
It can happen, uh, online.
(40:45):
And that's where a lot of it happens
today.
And people, a lot of times feel powerless
because like, well, it's just the internet, right?
So if you get a restraining order today,
make sure that you actually get a digital
restraining order, especially if it happened online.
All right.
A 500 foot restraining order doesn't mean squat.
Somebody can just stalk you on digital and
(41:06):
social media.
And that could be a big problem.
Uh, X claps back literally at French data
fraud claims, uh, Elon Musk's platform.
Yeah.
Elon's got more trouble again.
He's under fire.
X fired back at the French authorities'
fraud investigation, calling it politically motivated theater
and defending its algorithm amid global scrutiny over
(41:29):
tech transparency.
Ouch guys.
That is a, that is a real issue
there.
I mean a real big issue.
So we've talked about some very interesting things
here today.
Uh, you know, safety, we've talked about technology,
but I think in the greater scheme of
everything, everyone wants to make money.
But if you go to make money with
(41:51):
the intent of trying to harm someone,
you know what happens?
Well, I'm going to tell you, you actually
get a boomerang hit back in your
face.
So when you do something good, it's great.
But when you do something with the intent
to harm somebody, well, that boomerang comes right
back out and we'll hit you and you
(42:11):
get the same energy that you're pushing out.
And I think that's a problem for a
lot of people that I talk to.
They just look, oh, I want to make
money.
I want to make money.
And there's nothing wrong with making money.
It's just that you should never be making
money when it's at the cost of harming
or hurting someone.
I always said you can do everything in
this world that you want, as long as
you don't harm or hurt, physically or mentally, defame, stalk,
(42:37):
et cetera,
any other individual, whether intentionally or unintentionally,
because when you do this and you do
it one time, do you know what happens?
Oh, I'll do it again.
I'll do it again.
Before you know it, it becomes this pattern.
Let's talk about cookies.
Okay.
I love cookies.
My grandfather, bless his soul right now, he
(42:58):
passed when I was in second or third
grade.
I won't go there now.
But he loved chocolate.
I'm one of the ones in the family
that really loves chocolate.
And so when we think about what motivates
us and what gets us going.
So I love chocolate.
He loves chocolate.
But that chocolate doesn't appeal to everybody.
(43:20):
And so us having chocolate doesn't harm anybody,
right?
But what if you do something like open a
business and that business actually harms somebody?
Now, if you start a business, for example,
let's say there's a restaurant and you open
a restaurant across the street, are you harming
them?
It's competition.
So it's a little bit different.
(43:40):
But really, they shouldn't allow the same kind
of restaurant, like a pizza and a pizza
to be right across the street.
That's a big peeve of mine.
But a lot of people do it.
They do it.
And I think a lot of this stuff
that we hear, I give you the truth
about what's going on.
Notice I don't put a spin on it
saying, you know, well, this is the political.
(44:01):
No, I just give you the way it's
happening.
Now, you might not necessarily like what I'm
saying to you, and that's OK.
But I'm going to give you what the
truth is.
I had somebody the other day that wanted
to come on the show.
And I explained to them that at the
end of the day, it comes down to
who you are.
(44:22):
And they want to get on the show.
And it wasn't an author.
They want to get on the show.
I think it was the motivational show and
then the tech show.
And they sold water, right, RO water.
OK, so I figured they would come on
the show and they were going to talk
about, you know, how the process of RO
works and this could really educate people and
talk about it from a very specific view,
(44:45):
but, you know, product agnostic.
A lady hits me back and she's like,
oh, she's like, yeah, I sell RO water
systems.
I'm like, OK, so how is that going
to help my audience on my motivational show?
Oh, well, if they get RO water, then,
you know, they're going to live a lot
longer.
I said, so what are you going to
teach them?
Well, I'm going to teach them and show
(45:05):
them the product.
I said, no, you're not.
I said, what would you do on the
tech show?
Same thing.
You know, you wouldn't get into the technology
of how it works.
Oh, no, no.
I go on podcasts all the time and
I tell them about my system and then
I offer them a special.
No, that's a sales pitch.
And I respond back to people saying, hey,
you know, I appreciate you taking the time
to contact us.
(45:26):
We're just going to take a pass on
the opportunity.
Thank you so much.
And people are like a little taken back
because I stand my ground.
I don't bring somebody on any of my
shows that I feel is going to harm
somebody or going to exploit me or exploit
my audience.
I mean, I think that says a lot
to who I am.
(45:46):
Right.
But just saying it doesn't mean anything.
You guys that watch me day in and
day out, I had a guest not too
long ago and he actually said that, you
know, John, I really like your show.
You may not have 10 million viewers.
And I said, yeah, but one thing is
certain.
And he agreed, and that is that
(46:10):
you're very genuine.
You know, you don't try to push a
product or a service.
And I think that's a, I think that's
a very important thing.
I don't try to push a product or
service.
If somebody wants to come on the show
and advertise, okay, that's one thing, but that
doesn't mean that I'm going to endorse that
product just because they advertise.
(46:31):
Let's say we have somebody come on, like
this lady who sells the RO water system
and she wants to advertise in one of
our spots.
Okay, fine.
I don't necessarily have to endorse that product.
If I decide to, that's fine.
Probably be a fee for that.
But in other words, I won't just do
something for money.
It's gotta make sense to me.
(46:52):
You know what I mean?
And hopefully that makes
sense for you guys too, because a lot
of people can be, well, they can be
bought for money.
And I think that's a big problem, you
know, when you can have your morals or
your core principles bought out by the highest
bidder.
(47:13):
This happens with political leaders.
This happens with them on the state level,
the local level.
But eventually, you know what happens?
People are like, oh, yeah, you really don't
do that.
Or you're really not what you say you
are.
So a lot of the stuff that we
talked about today, you know, how the AI
(47:37):
is being used.
And those of you that don't know, I'm
actually going back to school for my next
master's in computer science, AI, and then my
PhD.
To me, AI is a tool, just like
any other tool in the world.
Could be a weapon, could be a mixer,
could be a hairdryer, right?
(47:58):
Could be even a jack to get the
flat tire off your car, right?
Could be a stapler.
Could be even a computer, right?
And there's lots that I could list.
(48:18):
Every one of them is a tool.
Now, according to Merriam-Webster, this is what
I always like to go back to.
Merriam-Webster.
What does Merriam-Webster say?
A tool is.
Well, I thought you'd never ask.
So a tool is a noun, a handheld
device that aids in accomplishing a task.
(48:39):
A cutter, a machine, like a stapler, et
cetera.
Something such as an instrument or apparatus used
in performing an operation or necessary in the
practice of a vocation or a profession.
For example, if you're doing a lot of
measurement, you might decide to do something in
a certain way, okay?
And that is because of what you're doing,
(49:01):
right?
If you use a tool in the wrong
way, then that can definitely be a big
problem.
You won't get the results that the tool
was made for.
For example, if you try to use, let's
just take a screwdriver, right?
There's a Phillips head screwdriver, which has, you
know, the cross-shaped tip, basically you have
(49:22):
kind of like one line going horizontally and then one
vertically.
And then sometimes they're a little bit closer
in, right?
Depending on the grooves of the screw.
And then you have a flathead, right?
If I try to use a Phillips head
screwdriver on a flathead, and they're very particular
(49:44):
about how the bits fit in and where
the angles are, it's going to do nothing
for me.
It might even damage the screw, right?
Or let's talk about this.
Let's say you're just taking a nail, right?
Or a screw.
Now, I wouldn't use a screwdriver to put
a nail in, wouldn't be effective.
(50:06):
I would not use a, let's say, nail
to put a screw in.
Some people do.
But I find that that's a very, very
big problem for a lot of people because
they think that they can just, you know,
flip out a tool anytime they want.
(50:29):
And that sounds like something that you probably
don't deal with every day, but I'm going
to tell you something.
You probably don't notice it until you've been
made aware of it.
Because maybe your whole life, you've been using
that tool, OK?
(50:51):
And if that's the case, and now somebody
says, hey, you're not supposed to use a
tool like that, or, you know, hey, you
really should use the hammer for the hammer.
You should use a screwdriver for the screwdriver,
right?
I think that's a very, very big thing.
And a lot of people, for whatever reason,
(51:15):
they just don't, they don't draw the conclusion
of why they should or
shouldn't do something.
They usually need some help from somebody else
that's going to provide, well, they're going to
provide that assistance, right?
I think when we hear somebody and they
say, oh, gee, this is how I'm going
to use this, you know, that's all fine
(51:37):
and well, right?
But at the end of the day, it
comes down to using a tool for the
greater good of all concerned.
Now, I know that sounds really easy, really
simple, right?
But you can use a computer to help
others, and you can also use a computer
(51:58):
to harm or hurt someone, just like we
talked about doxing, right?
And so when we talked about doxing, you
could use a computer to dox, which is
illegal, or you could use a computer to
help find useful information, like studying for
a class or something like that.
I mean, I think that's important to know.
So when Musk shut down the Starlink, I
(52:19):
thought about digital power as someone who's worked
with networking systems for decades.
I was stunned that a single executive could
impact global military operations like that.
Musk's decision to abruptly and rudely cut off
Starlink during Ukraine's counter-strike reminded me just
how fragile digital infrastructures can be when controlled
(52:39):
by a private individual.
It raises a vital question of who should
really hold the off switch.
Spear AI's ocean ventures, I once consulted on
an underwater robotics tech project.
So when I read about Spear AI getting
$2.3 million to monitor the oceans with
AI, it brought back some memories.
(52:59):
Hearing the sound profiles and talking about some
navigation was similar to a project we worked
on, but on a smaller scale.
Intel's spinoff felt like my first big pivot.
When Intel spun off a $5.8 billion
unit, it reminded me of a tough decision
I once made to sunset a product line
that we don't sell anymore, point-of-sale
(53:21):
systems.
Why?
Most of our customers were restaurants.
Because a lot of restaurants would magically just
burn down.
And a lot of people in the restaurant
industry were very rude.
And a lot of our leasing companies didn't
want us to continue because they would open
up under a different name.
And then suddenly, well, they were out of
(53:41):
business in a month or two.
And this was like a pattern.
Meta's European Union ad ban echoed my marketing
dilemma that I had.
And that's where I got the idea to
start my own marketing company over 15 years
ago.
I struggled with ad restrictions before, especially when
promoting ethical tech solutions, and even with those,
they would flag things, even though there was nothing
wrong.
(54:01):
But because so many people had done the
wrong things, it became a problem.
Waymo being cleared, this one's personal.
Autonomous driving has fascinated me since I first
rode in a self-driving prototype back in
2016.
The end of Waymo's probe tells me we're
getting closer to that future, but trust is
everything.
We have to earn it, not assume it.
(54:21):
I still think they've got a way to
go with marketing and even to prove to
us that this stuff really works consistently.
Microsoft's hack probe hit home to me too.
Years ago, I saw how a small security
flaw turned into a major breach for a
client.
Microsoft's investigation into Chinese hackers exploiting SharePoint is
a reminder even giants fall if they don't
(54:43):
patch fast enough.
Vigilance isn't optional, folks.
It is essential.
The Tea app leak made me reflect on
digital ethics.
Privacy isn't optional, folks.
I've spent years building secure platforms, and this
leak exposing over 72,000 private images is
heartbreaking to hear.
Apps must stop treating security as an afterthought
(55:04):
and people as numbers and as actual people
that have a personality.
If you're building tech, build it right.
Build it with the right security and build
it for the greater good of all concerned,
or don't build it at all.
Tesla's diner protests show tech isn't neutral.
(55:24):
When I opened my first company, I never
imagined people would protest a brand because of
its founder and personal beliefs.
Yet here we are.
Musk's Tesla diner controversy proves that in today's
world, CEOs don't just lead companies, they shape
a narrative.
Sometimes the narrative is awfully embarrassing and very
dramatic because they add to it.
(55:45):
Russia's Iran satellite deal made me think bigger.
I've launched tech across continents, but never into
space.
The satellite launch between Russia and Iran is
a power play, but it's also showing how
quickly tech can become geopolitical.
I believe innovation should unite us, never divide
us.
And Meta's teen safety cleanup is interesting, and
(56:08):
it did give me some hope.
I've coached youth in tech and seen how
online harm affects their mental health.
Meta wiping out over 600,000 harmful accounts
is a good step, but we need to
do a lot more.
AI must serve as a shield, not just
a data collector, and that's what it's primarily
doing.
(56:29):
Vermont's payphone gave me deja vu.
As a kid, I used to use payphones
in emergencies.
Seeing Vermont revive these and put them
in dead zones made me smile.
Sometimes progress means rethinking what we've discarded.
Maybe analog isn't dead, it's just evolving in
a different way.
(56:50):
The Chicago Sky's AI move inspired me as a
leader.
When I saw the Chicago Sky using AI
to protect their players from online abuse, I
felt proud.
A good use of technology.
That's why I mentor young entrepreneurs, to show
them that tech can make the world better
if they use it for the greater good
of all concerned.
(57:11):
And digital shield reinforces a lesson that I
preach every day.
Digital safety isn't a feature, it's a lifestyle.
And that's something I teach in every cybersecurity
workshop I lead.
Again, digital safety isn't an option, okay?
It isn't a feature, it's a lifestyle.
(57:32):
X versus France, the battle for digital transparency
when Elon Musk's platform X clashed with
France over fraud claims.
I wasn't surprised, but I was reminded that
transparency isn't a luxury.
It's a duty.
I've always said, if you build on a
platform, own what happens on it.
In other words, just take accountability, be responsible
for what you have.
(57:54):
But remember this point, digital safety isn't a
feature.
Digital safety isn't a feature, digital safety isn't
a feature, it's a lifestyle.
It's a lifestyle, it's a lifestyle, it's a
lifestyle.
Digital safety isn't a feature, it's a lifestyle.
Ladies and gentlemen, I'm John C.
Morley, serial entrepreneur.
Do check out BelieveMeAchieve.com for more of
my amazing, inspiring creations.
Be well, everyone, and use technology, especially AI,
(58:16):
for the greater good of all concerned.