
March 7, 2025 58 mins

🎙️ JMOR Tech Talk Show – Next Episode! 🚀 🔥 Buckle up, tech lovers! This week’s episode dives into the biggest stories shaking up the industry. From Elon Musk’s latest battle over a $2.4B FAA deal to Microsoft officially saying goodbye to Skype, we’re covering it all. Lenovo scores a major win, AI exposes hidden brain lesions, and privacy concerns rise over AI data training. Plus, a massive global crackdown on AI-generated abuse images leads to 24 arrests. Don’t miss out—watch the latest episode of The JMOR Tech Talk Show dropping within 24 hours!

📺 Catch it here: https://thejmortechtalkshow.podbean.com 🌐 More exclusive content: http://believemeachieve.com

🔹 Hot Topics This Week 🔹

1️⃣ U.S. lawmakers investigate Musk’s influence over a $2.4B FAA deal ✈️💰 Elon Musk finds himself under scrutiny once again as U.S. lawmakers probe his potential influence over a $2.4 billion FAA deal with Verizon. Concerns are rising over whether Starlink could play a role in this major contract, sparking debates about fairness and competition in government tech contracts.

2️⃣ Microsoft is shutting down Skype, shifting focus to Teams 💻❌ After years of dwindling popularity, Microsoft has announced that Skype will officially shut down on May 5, 2025. With users steadily migrating to Microsoft Teams, the company is shifting its resources to its enterprise communication platform. It’s truly the end of an era for the once-dominant video-calling service.

3️⃣ Lenovo wins a UK patent dispute against Ericsson ⚖️📡 Lenovo has emerged victorious in a heated patent dispute with Ericsson in the UK. The ruling grants Lenovo an interim license for Ericsson’s technology, marking a significant win in the ongoing battle over telecommunications patents. This decision could set a precedent for future licensing negotiations.

4️⃣ Global sting operation arrests 24 for AI-generated abuse images 🚔🛑 Authorities have cracked down on a disturbing rise in AI-generated child abuse images, arresting 24 individuals in a coordinated global sting operation. As AI technology becomes more sophisticated, law enforcement agencies are ramping up efforts to combat its misuse in creating harmful digital content.

5️⃣ Canada investigates X’s AI data use over privacy concerns 🇨🇦🔍 Canada has launched an official investigation into X (formerly Twitter) over concerns that the platform is using personal user data to train its AI models. With privacy at the forefront of tech debates, regulators are questioning whether X’s policies align with national data protection laws.

6️⃣ Apple introduces ‘age assurance’ tech as states regulate social media 🍏🔒 Apple is stepping into the debate over children’s online safety with its new ‘age assurance’ technology. As U.S. states move toward stricter regulations on social media usage for minors, Apple’s new feature aims to verify ages more effectively and enhance child protection.

7️⃣ Apple faces lawsuit over ‘carbon neutral’ Apple Watch claims 🌍⚖️ Apple is facing legal action over its marketing of the Apple Watch as ‘carbon neutral.’ Critics argue that Apple’s sustainability claims are misleading, raising questions about corporate environmental accountability and whether the company’s green initiatives truly live up to its promises.

8️⃣ Meta lets users opt out of AI data training 🔄🤖 In response to growing privacy concerns, Meta is now allowing users to opt out of having their data used for AI training. With AI models requiring vast amounts of data to improve, this move puts more control in users' hands, but also raises questions about AI’s future development.

9️⃣ Meta fixes Instagram Reels glitch flooding feeds with violent videos 🚨📱 Instagram users were shocked when their feeds were suddenly overrun with violent and disturbing Reels due to a glitch in Meta’s algorithm. The company has since resolved the issue.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:02):
Hi everyone, I'm John C. Morley, the host of
The JMOR Tech Talk Show and Inspirations
for Your Life.

(00:47):
Hey guys, welcome, it is John C. Morley here,
serial entrepreneur.
Welcome to The JMOR Tech Talk Show.
It is Friday and it's March 7th, 2025.
And we have a great show for you
today or tonight, depending on when you're watching
it.
Musk's FAA Battle, Skype's Goodbye, and AI's Next

(01:10):
Big Leap.
We're on series four and this is show
number 10.
Welcome everyone.
Do check out BelieveMeAchieve.com for more of
my amazing, inspiring creations.
Not just a tech show, but lots of
things like Inspirations for Your Life and so
many other short form and long form content
to definitely give you the fuel to empower

(01:31):
you to be even better than you already
are.
All right, guys, if you have not had
an opportunity to go to your kitchen and
grab something delicious, maybe you have some water
like I do over here, go grab that.
Maybe something sweet, something tart or not, healthy
or not, it's up to you.
Come on back and let's kick this show
off, all right?
I am your host, John C. Morley, a serial

(01:52):
entrepreneur.
It's always a privilege, pleasure and honor to
be with you guys on these amazing days
and evenings.
So, thank you so much and do check
out BelieveMeAchieve.com for all that great stuff
that I talked about before.
Buckle up, everyone.
Tech lovers, this week's episode dives into the
biggest stories shaking up the industry.

(02:13):
From Elon Musk's latest battle over a $2.4
billion FAA deal to Microsoft officially saying
goodbye to Skype, we're covering it all.
Lenovo scores a major win.
AI exposes hidden brain lesions and privacy concerns
rise over AI data training, plus a massive

(02:33):
global crackdown on AI generated abuse images leading
to 24 arrests.
Don't miss out, everyone.
Watch this episode right now because you are
going to be really happy that you did.
All righty, everyone.
So number one, it's the US lawmakers and
yes, they're investigating Musk's influence over a

(02:57):
$2.4 billion FAA deal.
Elon Musk finds himself under scrutiny, once again,
as the US lawmakers probe his potential influence
over a $2.4 billion FAA deal.
With Verizon at the helm, concerns are
rising over whether Starlink could play a role
in this major contract, sparking debates about fairness

(03:21):
and competition in government tech contracts.
So I think that's a big issue and
there's always been the issues when we talk
about these different types of things happening.
Who has control?
Is there any type of nepotism?
Is somebody doing something that's illegal besides that?

(03:41):
But basically, like I said, the US lawmakers
are scrutinizing SpaceX CEO Elon Musk's potential influence
over a $2.4 billion FAA telecommunications contract awarded
to Verizon amid concerns that he may push
for the project to be reassigned to a
Starlink service.
Musk, a senior advisor to President Trump, has
been critical of the FAA's telecom system, prompting

(04:05):
Senators Maria Cantwell, Adam Schiff, and Tammy Duckworth
to demand transparency on the procurement process.
So reports suggest that the FAA, in case
you're wondering, the Federal Aviation Administration, is
reviewing this contract right now, though

(04:28):
no decision has been made.
Musk recently admitted to falsely accusing Verizon of
jeopardizing air safety while the FAA continues testing
Starlink terminals in Alaska.
Wow.
That's just a mess.
So again, the Federal Aviation Administration hopefully will

(04:49):
be able to handle this and they're going
to operate within the confines of the law.
We hope they will.
I can't guarantee they will, but we're going
to hope.
And ladies and gentlemen, if you are wondering
what happened, well, that platform, when did Skype
start?
Well, Skype started a long time ago, actually
in 2003.
It was created by Niklas Zennström, Janus Friis,

(05:15):
and four Estonian developers and was first released
in August 2003.
So when did Microsoft buy Skype?
Well, Microsoft announced its takeover of Skype on
May 10, 2011 for $8.5 billion.
The acquisition was actually completed on October 13,

(05:35):
2011.
So, very interesting, because Skype was founded in
2003 and it became popular for making free
international calls.
eBay bought Skype in 2005 for $2.6
billion, and then in 2009, eBay sold a

(05:57):
majority stake in Skype to investors, and then
Microsoft saw Skype as a way to gain
relevance in the video chat messaging industry.
So, Microsoft integrated Skype with its own products.
Microsoft developed the client app for Windows 8
and Windows RT.
Microsoft prioritized Teams over Skype, and Microsoft announced
that it would now shut down Skype in

(06:20):
May 2025.
I believe that's May, we're saying May 5th,
2025, that they are officially going to shut
it down.
But what does all this mean to people?
Well, let me see if I can help
you guys, I guess, shed some light on
the situation, because that's probably the best thing

(06:42):
for everyone to know, because a lot of
people I know use Skype.
So, Microsoft is shutting down Skype, shifting their
focus to Teams.
After years of dwindling popularity, unfortunately, it's important
to understand that this is definitely an issue.
Microsoft has announced that Skype will officially shut

(07:04):
down again May 5th, 2025, with users steadily
migrating to Microsoft Teams.
The company is shifting its resources to its
enterprise communication platform.
It's truly the end of an era for
the once-dominant video calling service.
So, we used to use Skype, by the
way, for JMOR Tech Talk and Inspirations
for Your Life when we had call-ins.
We had a phone number, we still do,

(07:25):
but we're looking to migrate that over somewhere
else.
I'm not sure if I'm going to use
Teams.
I'm really a little unhappy that what they
did, so I don't really trust them anymore,
so I'm probably going to just take the
number.
I did move over some of our chats.
We use Skype for a lot of stuff,
but what I will tell you is that
we moved over for some of the free

(07:45):
stuff, like my friends, that we use something
called Jami, which is a free platform you
can use and spin around for a while,
and so that's one that has been used
by a lot of different companies.
And, you know, Jami is an app, so
you can use Jami.

(08:06):
It's jami.net, by the way, J-A
-M-I dot net.
And so, jami.net, it's used by a
lot of people, and so, you know, obviously
it's not based in the United States, but
I'll tell you, there's a lot of people
that use Jami.
It's FOSS, the Free Software Foundation, Website Planet, Forbes,

(08:27):
Cursus, TechRadar, La Presse, Team Building, Hosting Advice,
GoodFirms, Ethical, Medieval, Card Rates, Cubic, Corbin,
and there's others, but the fact that I
saw Forbes on there, that was pretty assuring

(08:50):
to me.
So, the platforms that they support are
Microsoft Windows 11, they support Linux, they support
Apple macOS, iOS, Android, and they even, yes, support
the Android TV platform.
So, Jami is free, you can go to
jami.net, and what you have to do

(09:12):
basically is you can, what I recommend doing
is you can download it on your phone
first, and then you can put it on
here.
Make sure if you do encrypt it, which
I recommend you do, you write down that
information, because if you forget it, there's really
no way to get it back.
I see support with them as kind of
like a non-existent thing.
But again, it's a GNU package, and I

(09:33):
don't know if you guys know what a
GNU package is, but a GNU package is
free software released as part of the GNU
operating system.
That is, it respects users' freedom.
The GNU operating system consists of GNU packages,
specifically released by the GNU Project.
GNU, by the way, is a recursive acronym
for GNU's Not Unix.
So, there is some value in that.

(09:57):
And so, if you try to contact them,
they have like a form that you would
contact them at jami.net.
So, I've only been using it a little
bit, but I still use Slack commercially for
all my other companies.
But Jami is a nice platform for just
some basic chat.
And also, there is, yeah, Jami does support,

(10:19):
and I was asked this, does Jami support video?
So, yep, so Jami is an end-to
-end encrypted private communication software.
And yes, it has improved video rotation support
now.
So, in recent weeks, they worked on improving
various parts of the Jami user experience, and the

(10:39):
sticking point was video orientation of mobile devices.
Until now, users had to manually press a
button to change it, which broke their expectations
and could take several seconds.
Their previous approach for managing video orientation was
to change it directly on the recording device,
requiring the whole media pipeline to be reinitialized.
This caused glitches and frame drops, not to

(11:00):
mention all the other things, in addition to
creating an interruption and bad experience for the
user's conversation, which had a negative impact on
the total experience.
So, again, I just started using it.
It's not a bad little platform.
But again, I like Skype.
I didn't think it was bad.
I have a lot of seniors that have
been on Skype, so now they're going to
have to move to something else.

(11:22):
I really don't want them to move to
Teams.
Now, people say to me, John, is Teams
free?
Well, so Microsoft Teams has a free version
you can use for work, school, or with
friends and family.
You can sign up for free.
I just don't want to play with Microsoft
anymore.
The fact that they did this, who knows

(11:43):
what they're going to do.
They might decide to just stop the whole
platform.
So whenever somebody does something like this, and
it's so unexpected, I can't trust them anymore.
So we've moved away from Skype for the
things we're using, the few little things we're
using it for.
But then I had also put our phone
number somewhere else, so that was another issue.

(12:04):
At number three, ladies and gentlemen, Lenovo wins
a big UK patent dispute against Ericsson.
So Lenovo has emerged victorious in a heated
patent dispute with Ericsson in the UK.
The ruling grants Lenovo an interim license for
Ericsson's technology, marking a significant win in the

(12:24):
ongoing battle over, yes, telecommunications patents.
This decision could set a precedent for future
licensing negotiations.
So I think when a company is trying
to use technology, and they're not trying to
profit from it directly, they shouldn't be penalized,

(12:46):
and I think that's all they're trying to
do.
They're just trying to be like a utility
for it.
They're not even trying to charge for it.
So I'm really happy for Lenovo.
I love Lenovo.
I use their laptops, been using them on
my, I think I'm on my fifth Lenovo.
I dump my Lenovo every three or four
years, even though it lasts a lot longer.
I just don't want to be there at
the end of the reign, right?

(13:07):
So number four, ladies and gentlemen, is a global
sting operation arresting 24 people for posting
AI-generated images that are not family-like.
Authorities have cracked down on these disturbing images
that unfortunately position minors in a very bad

(13:28):
light.
And as I said, they arrested 24 individuals
in a coordinated global sting operation.
As AI technology becomes more sophisticated, law enforcement
agencies are ramping up efforts to combat its
misuse in creating harmful digital content that exploits
children.
And not just that, it could exploit other

(13:48):
people as well, maybe even seniors, who knows?
So I'm glad to see that they're stepping
in, that they're taking this action.
I really think that's good.
I think that's really, really important.
And I think that's an important piece for
people to understand, but a lot of people

(14:09):
don't get it for whatever reason.
And so that's really an important thing I
want to tell you, is that maybe you're
wondering why something's like that or why something's
not like that.
But I think we have to be very
concerned about what's going on.
And that's probably the most important thing I

(14:33):
can tell you.
It's really an important thing.
And so you have to realize that there's
always going to be bad actors out there,
unfortunately.
And so bad actors out there, unfortunately, don't
get it.
And the question you might be asking, and

(14:56):
it's a very, very smart question to ask,
why?
I mean, that's a very good question.
Why are there so many bad actors in
the tech space?
Why?
Well, so the tech space attracts bad actors

(15:17):
for several reasons.
They look for the high reward and lower
risk.
Tech companies, especially startups, can generate massive profits
quickly, making them attractive targets for scammers, fraudsters,
and of course, unethical players.
We call them bad actors.
Regulations often lag behind innovation, creating loopholes.
And by the time that the regulations catch

(15:39):
up to them, they've already moved to another
country, another place.
And I think that's a very, very big
problem in itself.
Of course, there's the anonymity and the complexity.
The digital nature of tech makes it easy
to hide behind fake identities, manipulate algorithms, or
exploit cybersecurity weaknesses.

(15:59):
There's a lot going on there.
Rapid growth, loose oversight: many tech companies scale
fast, prioritizing growth over ethics.
Some cut corners because of price, because of
complexities, because of even their organizational policies.
Hype and speculation, new technologies, AI, crypto, Web3,

(16:20):
et cetera.
They spark hype cycles, leading to overpromising,
pump-and-dump schemes, and vaporware, right?
Things that you download and then it's just
not there the next day.
Or you go somewhere and you think you're
getting support, like even Facebook does this, right?
It's not Facebook directly, but there's a page
on Facebook, ladies and gentlemen, that you can
go to, a group, and they're asking like,

(16:43):
oh, we just need to get your username.
Why do you need my username if you're
part of Facebook?
So don't get sucked into those things.
Really, they say, oh, I'm going to be
your Facebook support agent.
They will never ask you for any of
that information, okay?
So if somebody is doing that, just walk
away, walk away, in fact, run.
Weak regulations and enforcement.

(17:04):
Many countries struggle to regulate tech effectively,
and enforcement, I should say, is inconsistent;
fines are often minimal compared to profits
made through unethical actions.
Data and privacy abuses.
We've seen this time and time again.
Some companies exploit user data without proper consent.

(17:24):
Some companies use smoke and mirrors so that
people sign stuff they don't know they've signed,
which is kind of trickery.
Selling information, manipulating users, or engaging in surveillance
capitalism because they think they can just because
they have a lot of money.
I'm not going to go naming companies out,
but we already know many of them that
have gotten into trouble.
So you know this stuff is true.
Toxic startup cultures moving fast and breaking things

(17:47):
can turn into move fast and deceiving people.
The pressure to outperform competitors can push individuals
and companies to act unethically.
And so people, a lot of people don't
realize their actions, these bad actors are hurting
themselves, number one, and the world at large.

(18:11):
And I think that's the part, you know
what, they don't really care.
So these bad actors might see short-term
gains, but in the long run, they damage
themselves, their industries, and the society as a
whole.
So loss of trust, customers, investors, employees, they
all lose faith in companies that engage in
shady practices.
Once trust is broken, it's nearly impossible to

(18:33):
rebuild, if at all.
Regulatory crackdowns, bad actors exploit these loopholes, as
we said.
Cybersecurity and privacy nightmares; innovation suffers because they jump
onto something new, but it's not been fully
tested, especially not just for UX, but also
for security and privacy.

(18:53):
Self-destruction, many bad actors get caught or
collapse under their own greed.
You know, think FTX, Theranos, the WeWork
implosion.
When they go down, they take jobs, investments,
and reputations all down with them.
And it's so frustrating because tech has so
much potential to create a much better world

(19:14):
for so many people.
But these bad players slow things down.
Do you think there's a way to clean
up the space, or will it always be
a game of cat and mouse?
I think we as people, business owners, people
in legislation, I think even parents and older

(19:38):
students, they need to be the voice.
They need to make sure that they're not
being manipulated.
They also need to realize that if something
seems too good to be true, guess what?
It probably is.
So you've all heard the conundrum before, right?
You know, this gives, I'm going to say,
all salespeople a bad name.

(20:02):
And so what is that?
Well, when bad actors in tech or any
industry engage in deceptive sales tactics, it unfairly
tarnishes the reputation of everyone else.
So it's not just bad actors in tech
and trying to hack, but it's the fact
of trying to cheat people on something.
Like I'll give you a perfect example.

(20:23):
Maybe you're part of a tech company.
I hope you're part of a legitimate one,
but maybe you've gone to a tech company
before, right?
Maybe you have.
And when you've gone to this tech company,
you're trusting, ladies and gentlemen, that they're telling
you the truth.
Maybe you're buying a brand new laptop, okay,
for your home or for your business.

(20:43):
And what happens with that?
Well, you trust that they're giving you what
you've asked for.
Maybe you've asked for a 12th generation or 14th
generation processor.
You wanted a multi-core i7.
You asked for 32 gigs of RAM, but
they actually gave you an i7 10th generation
processor.
And instead of 32, they just gave
you 16.
Maybe you asked for Wi-Fi 7, and

(21:07):
they might've only given you Wi-Fi 6.
And maybe you asked for...
This is a real big one.
Maybe you asked for, let's say, more storage.
So I don't just mean memory, ladies and
gentlemen.
I mean things like the amount of RAM,

(21:28):
random access memory that's on your computer.
And I know this might sound crazy or
maybe a little bit critical, but it's ultimately
the truth, ladies and gentlemen.
It is ultimately the truth.
So for example, let's just say that you
asked for a computer that had one terabyte
on it.

(21:49):
And let's say they gave you a computer
that only had 500 on it.
So now, by the time all the space
is taken away from that, that's a problem.
So if they gave you a computer, let's
say by the time it was format out,
maybe it was only 465 with some other
utilities on it.
So by the time you take away the
Windows piece, guess what you're left with?
You're left with only 23 gigabytes.

(22:11):
That's terrible.
And the price difference between, let's say, a
512 or 500 and one terabyte, it's not
that much, but they take it and they
pocket it.
Or what they do is they tell you
they're selling you a high-end SSD or
M.2. And you know what they do?

(22:33):
They give you a cheap one that has
like a no-name brand, right?
We've seen this before.
So this is the problem with this.
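Part of the "one terabyte that shows up as 465" gap the host describes is just unit math rather than missing hardware: drive makers advertise decimal gigabytes (10^9 bytes), while Windows reports binary gibibytes (2^30 bytes). A minimal sketch of that conversion (the function name is mine, purely for illustration):

```python
# Drive makers advertise decimal gigabytes (10^9 bytes);
# Windows reports capacity in binary units (2^30 bytes),
# so a "500 GB" drive shows roughly 465 GB before Windows
# and any recovery partition take their share.

def advertised_gb_to_reported_gib(gb: float) -> float:
    """Convert a marketed decimal-GB capacity to the binary figure the OS shows."""
    return gb * 10**9 / 2**30

for size in (500, 512, 1000):
    reported = advertised_gb_to_reported_gib(size)
    print(f"{size} GB advertised -> about {reported:.1f} reported")
```

On top of that conversion, the Windows installation and any preloaded utilities subtract further from what's actually usable, which is the rest of the shortfall the host is complaining about.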
Well, number five this week we're covering is
Canada investigates X's AI data use over privacy
concerns.
So Canada has launched an official investigation into
X, formerly Twitter, over concerns that the platform
is using personal user data to train its

(22:54):
AI models.
This is not new.
We're seeing this from a lot of companies.
With privacy at the forefront of tech debates,
regulators are questioning whether X's policies align with
national data protection laws.
Do they?
Do they not?
And I think a lot of these companies,
and I'm not going to name them directly,
but if they can get away with something,
guess what?
They will.

(23:15):
And I've said to you guys before, it's
not what you do when people are watching.
It's what you do when people aren't watching.
Let me say that again.
It's not what you do when people are
watching, but what you do when people are
not watching.
So it means you need to be ethical
all the time.
But some people say, you know what?

(23:35):
I can gain an extra 50 bucks.
I still remember to this day, our company
is over 31 years young, the tech company,
JMOR.
And I remember doing work for this tow
yard.
I'm not going to name the name for
privacy, because I don't want to do that.
But what I will tell you is that
this tow yard that we have been servicing

(23:56):
for many years, let's say like 10 years.
So all of a sudden, they need another
server.
OK.
And this is back in the days when
we had Novell.
You remember Novell NetWare 3.11, NetWare 3.12?
Back when it was on floppy disks.
And their server was in this dust hole.
It was just terrible.
So they were going to put an underground

(24:17):
vault in, which is the dumbest idea in
the world, to put the servers underground.
Why don't you just build a building?
That could be a lot cheaper.
Or even a prefab or something.
So they finally decided to do that.
And we took them from a coax network
to a complete twisted pair network.
At that time, it was actually just Cat5.

(24:41):
Cat5e wasn't really out back then.
And so we're building a server.
And at that time, servers were even more
money.
And so it was a SCSI server with
NetWare 3, I think at that time.
So it was NetWare 3.12 when we started.
But I think on theirs, I think it
was like NetWare 6 or something.

(25:02):
And it actually had a lot more on
board.
There was, we added an onboard SCSI device
to do their tape backup.
So we had to install Novell, right?
Just like you have Microsoft Server.
We had to install Novell NetWare.

(25:22):
That's what they needed.
So the reason they wanted that is because
they had a, their software they were using
wasn't fully Windows.
It was kind of a hybrid.
So you had to map your printers at
the DOS level, because it wasn't really truly
a Windows.
It was like a hybrid.
It wasn't even, it was probably maybe 20%
Windows, but it was really more operating

(25:43):
in a DOS shell.
So you'd load Windows, but then you had
to load a DOS shell.
Basically, you'd emulate from Windows there.
And so I remember doing the quote up
for the person.
I'm not going to give you their name.
And I gave him the quote.
And let's just say the quote, hypothetically, back
then, let's just say the quote was
$6,000 or $7,000.
With the APC, I think it might have

(26:04):
been like $8,000.
So he looks at my quote.
He looks at my competitor's quote, okay?
And there's about a $1,200 difference or
$1,500 difference between my quote and their
quote.
He's like, well, you got to sharpen your
pencil.
I said, well, let's look and see what
they're giving it.
And I'm looking, I'm looking, I'm looking, I'm
looking.
And then when I get down to the
line where it says Novell software, I don't

(26:28):
see a price.
So I went to him and I said,
you know, our quotes are like identical within
$1,500, I said.
But I noticed that the operating system is
not listed.
On any quote, that's supposed to be listed
separately.
I don't know what he did.
Well, I would go back to him and

(26:49):
ask him, did he include the software for
the server?
So he does, he comes back.
He said, yeah, he included, it's all included.
I said, so is he licensing you a
piece of software in your name with, you
know, software assurance where you'll get updates and
stuff?

(27:09):
And he goes, no, he ain't doing it.
He's just getting a copy from his friend
that we're just using.
I said, whoa, wait a minute.
I said, so he's basically stealing the software.
Well, John, you're putting words in my mouth.
Well, you're not paying for the software, are
you?
Well, no, but I'm paying for the server.
Yeah, but you're not paying Novell for their

(27:29):
piece.
I don't care.
I mean, Novell is a big company.
I don't need to pay them.
Yeah, you do.
Just like you have to pay Microsoft.
So we started getting into a little not
-so-friendly debate, saying that, okay, we can't
just give you the software.
He said, well, I'll give you like 150
bucks.
You just, I said, no, no, no, no.
We don't operate like that.

(27:49):
There are a lot of companies that will
operate like that and they'll be gone in
a few years or they're already gone.
I said, I will never play games with
software licensing or with the server licensing or
the workstation licensing.
So at that point, he said, well, we're
going to go with them.
They're cheaper.
So when I see what they did, they're

(28:13):
operating with pirated or bootleg software that's not
licensed.
So that was a problem.
And so we lost that client, right?
We lost that client and it happens, right?
It happens, but I would rather lose a

(28:33):
client than have to act in
a dishonest fashion, all right?
You might say, gee, John, you know, I
was stupid.
And well, the reason I'm saying this to
you is that reputation means a lot to

(28:53):
me, okay?
It means an awful, awful lot to me.
And so I'm not going to pocket money
or make software disappear or copy software that's
not for resale.
I'm not going to do any of that.
If I put software on your machine for

(29:14):
a trial, that's it.
But you're not getting software for free.
You're not getting that.
So he started to realize that I wasn't
willing to play ball, as he put
it.
And he says, well, you know, he says,
you got to learn to play ball.
I said, well, I don't play ball.
Well, that's your problem.
He says, and that's why we're not going
to, that's why we're going to go with
them.
We're not going to use you anymore.

(29:36):
I didn't turn him in.
I didn't say anything badly.
But I just said, I don't operate in
that fashion, right?
I know that I need to do what's
right when people are watching me when people
aren't watching me.
And would somebody ever catch him?
You know, probably not.
But I don't want to operate in that

(29:58):
environment.
Just like he had software on his computer,
okay?
And he said, well, I'll just use it.
I said, look, whatever you do, don't tell
me about that.
I said, I don't want to know about
it.
Because if I become privy to it, and
then I'm asked, I'm not going to lie.
So I don't even want to know how
you get the software on there.
I don't want to think about it.
But if I learn about it, we're a

(30:18):
Microsoft partner.
I said, and they asked me, I said,
you know, I'm going to tell them.
I'm not going to go to them directly.
But if I see you doing that, and
then they contact me, you know, I can't
lie.
So I don't want to know what you're
doing.
Don't even tell me how or what you're
doing.
So don't even, so he gets the software
installed.

(30:38):
And sure enough, they put bootleg software on
there.
Again, I have nothing to do with the
software.
But they had problems.
Software stopped working.
It stopped updating.
So you got to fix it.
I said, I said, well, I don't fix
bootleg software.
And I checked the license key on this.
And this key is not licensed to you.
And that was the final straw that said

(31:00):
to him, he can't work with me because
I don't want to play ball.
And by playing ball, it means do you
want to take a few extra bucks, put
it in your pocket, and not say anything?
Like, oh, I'll give you $100, and you
can just make $1,000.
No, I'm not going to do that.
I'm not going to play that kind of
game for you.
And number six this week is a real
interesting one.

(31:20):
Apple introduces age assurance tech as states regulate
social media.
So Apple is stepping into the debate over
children's online safety with its new age assurance
technology.
As the US states move towards stricter regulations
on social media usage for minors, Apple's new

(31:42):
feature aims to verify ages more effectively and
enhance child protection.
So this is a great question.
How is Apple going to verify kids' ages
without asking?
So that's a very, very good question.
So when a child turns 13, they can

(32:02):
keep their Apple ID and choose to opt
out of family sharing.
They can also have more control over their
iCloud account when they turn 13.
All right.
So that's when Apple considers them a child,
OK, up until that age.
So changing a child's age requires credit

(32:24):
card verification.
So you might be asking, how else is
Apple going to verify kids' ages?
Well, it's really simple.
They're going to go off the credit card.
Do parental controls turn off at 13?
So parental controls can turn off at age
13 for Google accounts and Apple IDs, but

(32:47):
it depends on the service and the family's
preferences as to whether that will be allowed
or not.
Can you leave Apple Family Sharing at 13?
Well, any family member over the age of
13 can remove themselves from a family group
at any time.
Just select your name and then select leave
family.
You can also sign into the Apple account
website and choose remove account in the family

(33:08):
sharing section.
Can you delete a child's Apple ID?
Yes, you can delete a child's Apple ID,
but you can't remove a child under 13
from a family sharing group, OK?
So that's important to understand.
So how are they doing this?
How would a child have

(33:38):
a credit card at that age?
Let me just ask you this: is Apple's
way of verifying kids' ages good or bad?
What do you think?
So your child will be able to

(33:59):
share music, movies, TV shows, books, applications, photos,
calendars, locations, and more with you and your
family.
I don't like that they can share location
with anybody.
That's a problem.
Will Apple protect kids?
So Apple provides parents and developers with industry
-leading tools that help enhance child safety while
safeguarding privacy.
You can manage your kid's devices and set

(34:19):
app limits for lots of different things.
Screen time, which apps they use, who
they can contact, and information about the age
appropriateness of apps.
Parents can limit app downloads that exceed the age
ratings they have set.
But like I said, when they turn
13, things change a little bit.

(34:40):
So Apple's child safety changes put
more of the onus on the app developer.
Apple's upgrading its app safety offerings, with
more coming as we move through the
end of February and into March now,
and they're hopefully going to
have this by the end of the year.

(35:02):
So whether these are young children, preteens, or
teenagers, um, their goal is to protect them
from online threats.
So they're trying to be more vigilant
and put that effort in.
The digital world, as we know, is increasing
in complexity every day, and the risks to
families are ever-changing, including the proliferation
of age-inappropriate content,

(35:23):
excessive time on social media and other platforms,
writes Apple. A big problem.
So for years, Apple has supported specialized Apple
accounts for kids called child accounts that enable
parents to manage many parental controls that they
offer and help provide an age-appropriate experience
for children under the age of 13.
These accounts are the mainstay of all the

(35:45):
child safety tools that they still offer today.
Launching later will be Apple's new privacy-driven
Declared Age Range API.
It allows developers to request an age
range for child account users, approved by
parents, that can be used to better tailor
app experiences and set access limits for age
restrictions.

(36:06):
For example, they might set one range
for ages one to eight, and
another from nine to
12 or 13.
So they can do that.
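The range idea can be sketched in code. This is a minimal, hypothetical Python sketch of range-based feature gating, not Apple's actual Declared Age Range API; the function name, tiers, and feature sets here are invented for illustration only.

```python
# Hypothetical sketch of parent-approved age-range gating.
# The tiers and feature names below are invented for illustration;
# they are NOT Apple's actual API or categories.

def features_for_age_range(low, high):
    """Return the feature set an app might enable for a declared range."""
    if high <= 8:
        return {"curated_videos"}                    # young-child tier
    if high <= 12:
        return {"curated_videos", "games"}           # preteen tier
    return {"curated_videos", "games", "chat"}       # teen and up

# The app only ever sees a coarse range, never an exact birthdate.
young = features_for_age_range(1, 8)
preteen = features_for_age_range(9, 12)
print(sorted(young), sorted(preteen))
```

The design point is in that last comment: the app receives only a parent-approved range, such as nine to twelve, so it can tailor the experience without ever handling a birthdate or government ID.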
Developers can still choose to have their apps
require government identification, but ID will not be
required to use the App Store at large.
The policy is moving us toward a significant

(36:26):
change that Apple is pushing
because of all the regulations
and pressure it's getting about age assurance
verification.
Politicians and tech leaders have offered mixed ideas
about the most effective way to do it
because, you know, we have to make sure,
first of all, that the kids can't get

(36:48):
around this, right?
Apple's updates will also incorporate age ranges into
more streamlined child account setup processes, as well
as making it easier for parents to determine
child safety settings for their kids.
So just a little FYI, if you're setting
up an account for a child, um, you
obviously know how it works.
Now, if you set up an account for
yourself and think, oh, I don't want to give the

(37:09):
correct year, don't do that.
I had a client do that.
They put in a year from, like,
five years ago, and then they were in
a child account.
So then we had to add them to
a parent account so we could change the
information, because a child couldn't change it directly.
It was, it was really a mess.
So they're doing this to try to make

(37:29):
the world safer and to protect kids from
getting, let's say, their noses into things
they shouldn't, even accidentally.
So Apple's new feature aims to verify ages
more effectively and enhance child protection.
Remember, developers can still require them to
have a valid government ID.

(37:53):
Basically, you know, a driver's license, passport, et cetera.
And number seven, ladies and gentlemen: Apple faces
a lawsuit over carbon-neutral
Apple Watch claims. Apple is facing legal action
over its marketing of the Apple Watch as,
quote unquote, carbon neutral.
Now, critics argue that Apple's sustainability claims are
misleading,

(38:14):
raising questions about corporate environmental accountability and whether
the company's green initiatives truly live up to
its promises.
And again, I get why this is happening,
but I think it's happening just because it's
Apple.
I don't think it's happening because of the
carbon reason.
I mean, let's face it.
There are companies out there, individuals that just

(38:35):
want to bring Apple down because they do.
Not really for the best, let's say, ethical
reasons.
But nevertheless, if you're doing something, it's kind
of like, you know, going to court, right?
And you're, let's say, trying to defend yourself
for something.
But suddenly you learn about the plaintiff and

(38:56):
you learn that they're guilty because of something
else.
Do you bring that to the trial?
Well, if it speaks to, let's say, their
character, or shows that they might be
reckless, then yes, it should be brought up.
If it's just something you bring
up that doesn't have relevance, well, then
that's not really good to do.

(39:18):
Number eight, ladies and gentlemen: Meta lets users
opt out of AI data training.
So in response to growing privacy concerns, Meta
is now allowing users to opt out of
having their data used for AI training.
With AI models requiring vast amounts of data

(39:39):
to improve, this move puts more control
in users' hands, but also raises a lot
of questions about AI's future development.
And I think that's a question for many,
many people today.
Is that happening?
Is it not happening?
And I think that's a big problem.
Because if we just do things to skate

(40:00):
by, we might think, oh, gee, it's not
going to hurt anybody.
But you know what?
The ones we're hurting besides ourselves are the
people we probably care about and love the most.
And that wasn't our intent.
And number nine: Meta claims to be fixing their Instagram
Reels glitch that flooded feeds with violent videos.

(40:20):
So Instagram users were shocked when their feeds
were suddenly overrun with violent and disturbing reels
due to a glitch in Meta's algorithm.
I love the way, like many people just
refer to this as a glitch.
This is more than a glitch.
This is like a big, big problem.
The company has since resolved the issue, but
the incident has reignited concerns about content moderation

(40:44):
and platform security.
Again, this is our good friends at Meta,
right?
So we'll have to just see what happens
there.
But I imagine Meta and Facebook will definitely get sued
many more times.
This is not the last time they're going
to get sued.
Number 10, the U.S. House panel subpoenas
tech giants over censorship and foreign ties.

(41:08):
The U.S. House Judiciary Committee has issued
subpoenas to major tech companies over concerns regarding
foreign influence and potential censorship practices.
Lawmakers are demanding transparency from tech giants as
they investigate how these platforms manage information and
international connections.

(41:28):
Number 11: Musk falsely accuses Verizon of
aviation system failures.
So this is interesting, right?
After we had the issue from before.
And then he retracts it.
So Elon Musk stirred controversy by falsely accusing
Verizon of operating a faulty U.S. aviation
system.
After facing backlash, he later retracted his statement.

(41:51):
This latest incident adds to Musk's growing
list of public missteps that attract scrutiny from
regulators and the media.
I think sometimes these people that are big
in the limelight do things, but they
don't necessarily know why they've done them.
And then it's like they got to put
their foot in their mouth, like, oh my

(42:11):
gosh, like, what did I just say?
Like, what am I even talking about?
So we're just going to have to follow
it and see what happens.
Number 13 for this week, ladies and gentlemen:
an AI tool finds epilepsy-related brain lesions missed
by many doctors.
A breakthrough recently happened with AI: an
AI tool is now revolutionizing epilepsy diagnosis by

(42:34):
detecting brain lesions that even experienced doctors have
missed, unfortunately.
This technology could lead to more accurate diagnosis
and better treatment options for patients suffering from
neurological disorders.
So I know what you're probably saying to
me, John, how are they using AI to

(42:55):
detect brain lesions?
So the how: it's being used to
detect lesions by analyzing medical images like MRIs and
CT scans.
It enables faster and more accurate diagnosis, even
helping to identify subtle changes over time.
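To make the analyze-and-flag idea concrete, here is a deliberately tiny toy Python sketch: it flags statistically unusual bright spots in a synthetic 2D "slice" using a simple z-score threshold. Real systems like MELD Graph use trained deep-learning models over full MRI volumes; the synthetic data and the threshold here are illustrative assumptions only, not any tool's actual method.

```python
import numpy as np

# Toy illustration of the analyze-and-flag pattern on a synthetic scan.
# Real lesion-detection tools use trained deep-learning models on full
# MRI/CT volumes; this z-score threshold is only a stand-in for the idea.

rng = np.random.default_rng(0)
scan = rng.normal(loc=100.0, scale=5.0, size=(8, 8))  # "healthy" tissue
scan[2, 3] = 160.0  # inject one abnormally bright voxel

mean, std = scan.mean(), scan.std()
candidates = np.argwhere(scan > mean + 3 * std)  # flag >3-sigma outliers
print(candidates.tolist())  # → [[2, 3]]
```

The pattern is the point: analyze the whole image, compute what "normal" looks like, and surface only the outliers for a human to review, which is exactly where keeping a clinician in the loop matters.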
And there's more detail, because AI is being

(43:17):
used to detect brain lesions in multiple ways,
and they're constantly updating their AI algorithms
for this.
Examples include MELD Graph, a tool
that can detect brain lesions that doctors miss.
As I said, the MELD Graph.
There's DEPLOY, an AI tool that can
analyze images of brain tumors to predict their
methylation state and infer the tumor subtype if

(43:41):
it exists.
And DeepGlioma, an AI-based diagnostic screening system
that detects a brain tumor's genetic mutations in
under 90 seconds.
So I've said to you guys before, it's
great that we have this technology, but we
never want to replace the middleman.
We always want to keep somebody in the
loop.
I think that is probably one of the

(44:03):
most important things that I can tell you.
We always want to make sure that that's
what's happening.
If it's not, then we're just allowing the
machine to make decisions.
And that's not really good.
We've got to keep, ladies and gentlemen, the
human in the loop or two or three.

(44:24):
I think it's okay to use technology to
automate a process, but it should not be
100% autonomous where it completes the whole
thing itself.
We have to look at what it is.
I mean, it's one thing if it's a production
line; that's fine.
But when we're talking about health or humans
or emotions or anything that could affect somebody's

(44:47):
safety, I think we really got to be
careful about that.
That could be things like regulating our utilities,
et cetera.
So I think that's really, really important.
And ladies and gentlemen, a Microsoft 365 outage disrupts
Outlook and cloud services worldwide. Millions

(45:07):
of users were impacted as Microsoft 365 suffered
a massive outage, disrupting access to Outlook and
other cloud-based services.
Businesses and individuals faced major inconveniences, highlighting the
growing reliance on cloud platforms and the risks
of widespread downtime.
Now, I know this might sound like a
lot of hodgepodge, but the truth of the

(45:30):
matter is, ladies and gentlemen, it's not hodgepodge.
Many people think that if we treat it
like this, maybe it'll just go away.
Well, I got news for you guys.
It's not going away anytime soon, unfortunately.
So on March 1st, Microsoft 365 services, including

(45:52):
Outlook, experienced a global outage that affected users
in New York City and Toronto, with around
35,000 incidents reported.
The disruption, which began at 4 p.m.
Eastern time, prevented access to emails and other
Microsoft 365 applications, frustrating so many users.
Microsoft traced the issue to a problematic code

(46:14):
update and quickly reverted it.
Of course, they restored most services very quickly.
It was actually done by late afternoon.
The company continued monitoring the situation to ensure
stability, highlighting the vulnerabilities of cloud-based services
and their significant impact on global communications.
Wow, that is a lot of information, let

(46:36):
me just tell you.
And I think as we get more into
the tech war, what I'll call the
tech battles, it's focused around AI, okay?
It's focused around AI.
And AI will give us many tech battles

(46:58):
because sometimes people implement it too quickly.
They don't have safeguards in place.
Some people will say AI is changing the
war.
Bill Gates said this a long time ago, and
I quote: artificial intelligence, AI, has the potential
to revolutionize many aspects of life, but it
also poses risks.

(47:20):
People say, will AI win the next war?
So history has shown that the true test
of military power is not mere technological sophistication,
but the ability to integrate technology with other
elements of national power to achieve strategic objectives.
In fact, we learned this from the Harvard
Mark I, which was used

(47:40):
to help produce the atomic bomb.
In the broader context of our world, AI
is an important tool, but not a guarantee
of future victories in the conflicts that may
arise.
So people ask every day, will AI benefit
us or harm us?
So AI provides numerous benefits, such as reducing

(48:02):
human errors, time-saving capabilities, digital assistance, and
unbiased decisions.
However, the disadvantages include a lack of emotional
intelligence, encouraging human laziness, and job displacement.
And is an AI program
capable of learning and thinking the same
way a human is? I don't
really believe it's the same way a human
is.
Will AI replace tech support?

(48:22):
By 2027, AI will be providing more IT
technical support, at least in written form, than
humans.
The CEO of cybersecurity company Palo Alto Networks,
Nikesh Arora, said earlier this year, and I quote,
that it has reduced IT support by
nearly 50%, with an expectation that it

(48:43):
will be reduced by 80%.
Well, I have a problem with that, because
I think it's good that we have technology,
but I don't think it should ever replace
humans.
And I get that there's certain jobs out
there that we probably don't want it to
be in because of safety issues.
I get that, right?
I definitely get that.
But if we do not understand some of

(49:06):
the limitations and we don't understand some of
the risks, how are we ever going to
grow?
How are we ever going to move from
a point A to a point B?
And I think that's probably the biggest thing
I want to say about that.
Artificial intelligence, ladies and gentlemen, it's not good.
It's not bad.
It's a tool.
And it's how we choose to use it that

(49:27):
makes it so, all right?
Jeff Bezos had something interesting to say.
Quote, a thousand AI applications in development.
That's what he said.
Tim Cook at Apple has said that, quote,
Apple doesn't see AI as separate from its
other products.

(49:48):
It's part of them and has been for
years, close quote.
Mr. Elon Musk has said he sees
AI exceeding human intelligence in the next year
or two.
I don't know if I see that.
Will AI take control of the world?
If you believe science fiction, then you don't
understand the meaning of the word fiction, right?
The short answer to this fear is no,

(50:09):
AI will not take over the world, at
least not as depicted in the movies.
AI will have a lot of decision-making
capability.
People ask, will ChatGPT replace humans in

(50:29):
customer service?
The short answer to the question is no.
Not every customer wants to interact with a
chatbot.
And there are plenty of tasks that should
never be automated.
I think we can all agree with that
situation.
But many times people just, I don't know
why, get fearful around any kind of

(50:50):
technology, whether we're talking about an iPhone
or a lot of different
things.
And I think the huge problem for
many, many people is that they focus on
what this could become, right?
That's what they look at.
And if we look at how it can
be, then it's important to understand that there

(51:15):
is a big possibility that we can make
amazing changes in our lives.
And when I say big changes, I mean
that we have the ability to do some
amazing things, okay?
We have the ability to do some amazing,
amazing things.
And so as we evolve, as AI tends

(51:38):
to morph, people say to me, John, what
is the next stage in AI?
The next stage in AI is probably the
rise of AI agents and collaborative systems, going
from simple information retrieval to more complex, unified

(52:00):
workflows across many domains, with a focus on multimodal
capabilities and real-world applications.
So I know this sounds like something that's
really, I don't know, a little bit crazy.
But I think the most important thing that
we have to understand is that AI is

(52:23):
not going away, okay?
So we have to embrace AI and know
that it can help us, all right?
But know that AI is meant to be
a tool, meant to be a tool.

(52:45):
A little bit of water here, guys.
And so, although AI can give us some
great ideas for optimizing, people ask me this
all the time: John,
you know, AI is out there.
And this is a great question that I
get asked many times from clients.
What is the best use of AI?

(53:08):
That's a tough one.
And it's gonna be leveraging things like business
and productivity, automation, smart assistants, like your chatbots,
things like that, data analysis, decision-making, healthcare
and science, medical diagnosis, drug discovery, personalized treatments,

(53:29):
cybersecurity and fraud detection, education and personal
development, AI tutors, language learning, sustainability and
environmental impact, and AI for disabilities: speech-to-text,
text-to-speech, and AI-powered prosthetics that improve accessibility.

(53:54):
So we're trying to add AI on as
a, I guess the best way to say
this is as a piece that's gonna allow
us to evolve.
And so, you know, AI in its own
right is not directly powerful.

(54:16):
It's how we as humans, programmers, developers, and
engineers make the changes.
There's also ethical AI and humanitarian efforts: bias
reduction, disaster response.
We've already had challenges in those areas.
So I don't think these are the areas

(54:37):
that we're gonna see tomorrow.
I just don't see it tomorrow.
But when we talk about things like, you
know, Mr. Elon Musk finding himself under scrutiny,
I think he believes just because he has
a few extra dollars that he can do
whatever he wants, that he can do no
wrong.

(54:57):
And that's what I tend to keep hearing
over and over again, right?
The real question I have is, you know,

(55:18):
what is the real reason
Microsoft is shutting down Skype?
So they're retiring Skype.
They wanna prioritize Teams.
And this is according to an official blog
post.
They wanna transition over.
So you can use Microsoft Teams free version.

(55:40):
And this is a great question: John, what's the
difference between Microsoft Teams free and paid?
The main difference between free and paid is
the features.
The free version is good for small businesses
and nonprofits, while the paid version offers a
lot more flexibility.

(56:00):
So the free version of Teams is free
to use for anyone without the Office 365
subscription.
It lacks core features such as meeting
recording, phone calls, and scheduled meetings.
The maximum number of users
that can be added is 500,000
per organization.
Storage space is limited to two gigabytes per
user and 10 gigabytes of shared storage.

(56:23):
So that's a real big
thing there, guys.
I mean, that's a real difference between these
plans, depending on whether you need to
have those features or not.
I think that's really a big thing.
There's no 24/7 mobile and web support
in the free version.

(56:43):
The paid version comes bundled with the Microsoft
365 suite of products and services.
You also get the scheduled meeting feature,
integrated with the Exchange calendar.
And there's no limit to the number
of users that can be added in the enterprise
plan.
And the paid version provides one terabyte of
storage per user, per user, okay?

(57:05):
That was only two gigabytes in the free version.
It also provides 24/7 support for both
mobile and web, plus other administrative tools, by the
way.

(57:33):
I don't know the answer to that.
And I think this is the big problem
I have, whether we're talking about Microsoft, whether
we're talking about other companies.
My problem is, are we really talking about
what we need to talk about?
Ladies and gentlemen, I want you to know
something.
I am John C.

(57:54):
Morley, serial entrepreneur.
It's been a great show with you.
I hope you guys have an amazing rest
of your day.
And I'll catch you guys real soon.