Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:04):
Hi everyone, I'm John C. Morley, the host of
The JMOR Tech Talk Show and Inspirations for
Your Life.
(00:57):
Well hey guys, welcome to The JMOR Tech
Talk Show.
It is great to be with you here
on episode number five with series four.
We have lots of great stuff to talk
about here tonight.
I hope you guys have been enjoying the
show here that I put on every single
(01:17):
week on Inspirations for Your Life, which is
so great to be with you.
I know there's so much amazing stuff out
here and I know that you're really going
to benefit from everything that we offer here,
so I hope you guys definitely enjoy the
show and let's just kick this off.
(01:38):
All right, everyone?
All right.
So welcome everyone.
If I haven't welcomed you enough, do check
out BelieveMeAchieve.com, where you can find my amazing, inspiring creations,
of course, 24 hours a day. But welcome,
everyone, to The JMOR Tech Talk Show.
Great to be with you here.
Our title for tonight, which I think you
guys are going to find pretty apropos, AI
(01:59):
Showdowns, Big Tech Moves, and Future Innovations.
Let's catch the latest episode here tonight as
I unveil some amazing stuff.
The JMOR Tech Talk Show converts into an
audio cast within 24 hours or less, so
you can always catch it on many
of our other platforms, which are very easy
to find at BelieveMeAchieve.com.
(02:21):
And we're diving deep into the latest tech
trends and news that is shaping our world,
from legal battles to cutting edge breakthroughs.
Let's get ready for an exciting episode packed
with insights.
So let's get started.
Well, first thing I want to crack the
egg on is going to be, yes, Mr.
(02:43):
McCartney.
This is a very, very interesting thing, if
I have to say so myself.
So McCartney has warned the UK government against
allowing artificial intelligence to rip off artists, urging
stronger copyright protections for creative industries.
The Beatles musician expressed concerns that AI could
(03:04):
exploit artists by using their work without compensation.
McCartney highlighted the risk of tech giants benefiting
from this, while the creators of the original
content lose out.
The UK government is currently reviewing its copyright
laws, with some proposing that the artists should
be able to license their work for AI
(03:24):
training.
McCartney emphasized the importance of ensuring artists retain
ownership of and fair compensation for their work in
the face of advancing AI technology.
So unfortunately, what's been happening with artificial intelligence
is a lot of people have been exploiting
it.
They've been using people's content without their permission.
(03:48):
Forget paying them, not even asking their permission
if they could do it.
And it's not a not-for-profit.
We all know the AI is going to
go for-profit.
So it's becoming a real debacle.
And I agree with what McCartney's saying.
And this is one of the reasons I
don't want to create an AI version of myself,
because the thing is, how do you control
it?
How do you manage it?
(04:08):
How do you monitor it?
So you really got to be very careful
with your likeness, right?
Your images, your voice.
I don't even like to use my voice
when it comes to these things from the
different banks out there, right?
Many of the major banks offer voice verification.
And I got to tell you something, ladies
and gentlemen, voice verification is terrible.
You should not use voice verification with banks.
(04:33):
And let me just tell you why.
So the thing is, if somebody knows your,
let's say, account information, they can actually
use your voice and try to get into
the account and make changes, because the account
uses voice verification.
(04:54):
The person on the other end doesn't go over any more details.
And they'll just answer any questions you want.
So that's a really, really big problem.
And with some other news, which I think
is very interesting, well, Meta is starting to
unveil something.
Meta is testing ads, which they rolled out just
(05:14):
this past January 24th, on Threads in the
US and Japan.
So Meta Platforms announced it would begin the
testing just a few days ago, on January 24th,
with the goal of expanding its monetization options.
So image ads, starting on
January 24th, were basically placed between content posts
(05:37):
for a select group of users.
Meta plans to closely monitor the test and
allow businesses to extend their existing ad campaigns
to Threads.
The company is also introducing an inventory filter
using AI to help advertisers control where their
ads appear.
Despite not expecting Threads to significantly impact revenue
in 2025, Meta aims to capture more advertising
(06:00):
spend amid ongoing volatility at rival platforms like
TikTok.
I have to tell you, we had used
Facebook for a while to do advertising, but they're
not where they used to be.
I think it's because they've created all these
filters, and it's causing a big problem.
I think the best way to reach people
is through personalized contacts.
(06:24):
Those are things like when they sign up
with their email addresses at trade shows, personal
events, things like that.
So we'll have to keep tabs to see
what they're doing, but I got to tell
you something right now, the stuff that Facebook
is doing, I just frankly don't trust it.
(06:46):
And so they're saying it's going to significantly
impact revenue in 2025.
I don't know.
I don't know if it's really going to
do what they say.
I don't believe a lot of what Meta
says, to be quite honest with you.
All right, and here's one for the books.
OpenAI is being sued by publishers in
(07:07):
India.
What the heck is this all about, John?
Yeah.
Indian book publishers, along with their global counterparts,
have chosen to file a copyright lawsuit
against OpenAI in New Delhi.
The case focuses on concerns that OpenAI's
ChatGPT tool uses, quote unquote, copyrighted content,
(07:29):
including books, to produce summaries without authorization.
Publishers, including major names like Bloomsbury and Penguin
Random House, argue that the AI-generated summaries harm
their sales and creative work.
The case, which is part of a broader
global legal challenge against AI companies, could influence
(07:49):
India's legal framework for AI use.
OpenAI has denied the claim, stating that
its use of publicly available data falls under
fair use.
Yeah, I don't know about that.
I think this is a little gray, and
I think we're going to see the rules
(08:09):
changing on this.
I mean, I think you've got to be
very careful about using AI and where the
data comes from.
I've just been seeing so many mistakes with
AI, it's not even funny.
The other day, I was just doing a
test with it to see if it could
do some quick calculation for making change.
And so, I purposely gave it the wrong
(08:31):
answers, and guess what it told me?
You're correct.
I'm like, no, I'm not.
Oh, I apologize.
You're right.
That's not the correct answer.
So, I don't know.
I think these chat engine companies, I think
they're not focusing on quality.
They're focusing on quantity.
(08:52):
And that's a big concern for me, as
I'm sure many other people.
But this whole thing with the little caper
of what's going on with the publishers in
India, this is something that's very, very real.
And it shows, ladies and gentlemen, how these
foreign countries are being more, let's say, proactive
(09:14):
with this.
So, we're going to have to see what
happens.
But I got to tell you that this
is going to set the precedent for the
rest of the country.
And it's not going to be the first
time that we have an AI challenge with
people stating that the content it provides is
basically copyrighted.
(09:35):
So, that's a big problem, ladies and gentlemen.
And I think we just got to be
cognizant and mindful of the dangers of AI
in the fact that some of the information
on there, well, let's just say it might
not be properly licensed.
And if you go and use that content,
well, you're going to get more than just
a slap on the wrist, if you know
(09:56):
what I mean.
Well, Google boosts their efforts to combat fake
reviews after a UK probe at them.
Google has committed to taking stronger actions
against fake reviews following an investigation by the
UK's Competition and Markets Authority.
As part of the CMA crackdown now,
(10:17):
Google will sanction the UK businesses and individuals
involved in manipulating star ratings and place warning
alerts on business profiles linked to fake reviews.
So, this comes after the CMA's, which is
the Competition and Markets Authority's, ongoing investigation, which
actually started in 2021.
(10:39):
It highlighted concerns about the influence of fraudulent
reviews on consumer spending.
And the CMA is set to gain additional
powers very soon, they're saying in April, to
enforce consumer law independently, without needing
to take cases to court.
So, I think this is very interesting.
(11:00):
And the fact that Google is now trying
to clamp down, because I lost a lot
of respect for Google, all right?
Not to get into that whole story here,
but a while back, we decided to hire
Google because we thought that by hiring Google,
it would take us to the nines.
What it did was it started to take
us to the cleaners and not to clean
(11:22):
our clothes, to just basically take our money.
So, we decided that we're going to drop
them.
I remember them being so arrogant and so
nasty.
And then the minute we asked questions like,
well, if you're not going to do that,
we're not going to work with you.
Like, who the heck are you?
You're Google.
You're a search engine.
Like, who the heck are you?
So, you guys have known them because they
(11:42):
make a lot of money.
But I think things are going to change
with the search engine world in the next
couple of years.
And right now, we already see Google trying
to get into the AI world.
Well, I didn't trust them in the search
world, and I certainly don't trust them with
AI data.
We all know how underhanded they have been
in the past.
And this is fact.
This isn't my speculation.
(12:03):
Google has been very underhanded in the past.
And so, I don't think they've changed.
I think a lot of times, now that
they're getting hit with these big lawsuits, they're
trying to kowtow.
(12:23):
But I don't think they're really going to
change their ways.
They're going to try to make people think
they are, but they're really not.
They're not fooling me.
And ladies and gentlemen, here's a real cool
one.
The European Union is set to stress test
social media on disinformation before the German election.
Yeah, check this one out, ladies and gentlemen.
(12:44):
So, the European Commission has invited major social
media platforms, including Facebook, TikTok, and X to
participate in a stress test, which is actually
today, January 31st, 2025, aimed at assessing their
efforts to combat disinformation ahead of the German
federal election.
This test is part of the European Union's
(13:05):
Digital Services Act, DSA, and will simulate various
scenarios to evaluate how these platforms will respond
to disinformation risks.
TikTok has confirmed its participation, while others, including
Meta, Snap, and Microsoft, have not yet confirmed.
This marks the first such test held for
a national election following a successful trial before
(13:27):
the European Parliament elections.
The test will be conducted privately, which
they did today, in collaboration with the
German authorities.
Now, I have to believe that Facebook and
these other companies did pop in.
I mean, when we had done our research,
we found out that they weren't, but I
would probably say at the last minute that
these other platforms were going to jump on
(13:48):
board, because if anything, it's going to help
them if they're transparent about stuff.
I think the problem with a lot of
these social media platforms, ladies and gentlemen, is
that they're not transparent about who they are
and about what they do.
Now, that's not fact.
I was having a conversation with a lady
the other day, and telling her I was
a journalist and all that.
(14:08):
Some people, they get a little huffy and
puffy.
I don't tell people I'm a journalist to
brag, and that I'm a member of international
press to brag.
I tell them that because I want them
to understand how serious I am about writing,
how serious I am about videography, about journalism.
Right away, she was taking me down, and
(14:29):
I didn't try to defend it.
I just said, well, that's what I do.
She said, well, that's not really journalism.
I said, well, I just tell people the
way it is based on facts.
I could tell she was just one of
these people that got very uptight because it
wasn't her way.
I don't know.
(14:49):
I would say you're going to like me,
you're going to love me, you're going to
hate me, and I'm going to keep going
on, because I'm going to tell the truth.
I'm going to tell based on what I
have found through sources.
I'm not going to make it up.
If I'm going to state an opinion,
I'll let you know, but most of the
time I tell you things based on fact.
I tell you why I don't like something
if it goes above and beyond the facts, but
I have good grounds for when I do
(15:11):
things like that.
People are still reminiscing about that lights-out
moment for TikTok not too long ago.
It actually happened, ladies and gentlemen, if you
guys remember, that was actually on January 19th,
which was, yes, 1, 2, 3, 4, 5,
a week from basically this past Sunday.
(15:38):
I think a lot of people were very,
how can I say, distraught when TikTok's partners
in the US turned the lights off because
they didn't want to get hit with all
these fines.
The question, ladies and gentlemen, is will they
still have a TikTok?
(16:00):
That's a very, very good question.
I don't have the answer for you.
I know Mr. Donald Trump would like to
basically have TikTok pay the United States $100
million because he's valuing it at $200 million.
I think that's an issue, and I think
a lot of people might be interested in
(16:22):
the investment, but TikTok is being very selective
in who they want to sell TikTok to.
I don't know, ladies and gentlemen, I think
that's a touchy subject.
It's probably the best way to put that.
Will Mr. Musk, Mr. Beast, or Ellison buy
(16:46):
TikTok?
Who's it going to be?
Several prominent figures, and maybe there will be more,
including, like I said, Elon Musk, Mr. Beast,
whom we know is Jimmy Donaldson, and Larry
Ellison, have expressed interest in purchasing TikTok, as
the platform is facing some potential political pressure
in the United States due to national security
(17:08):
concerns related to its Chinese ownership.
Now, last year, President Biden's administration gave TikTok's
parent company, ByteDance, until January 19th to sell
the app or face a possible ban.
Among the potential buyers, Musk, already the owner
of X, formerly Twitter, has voiced opposition to
(17:28):
a TikTok ban.
Trump has supported the idea of Musk or
Ellison taking ownership with Ellison's Oracle already managing
TikTok's servers.
So that's an interesting, a very, very interesting
thing, as I'd like to say.
(17:50):
Meanwhile, other investors, like Frank McCourt, have proposed
purchasing TikTok without its Chinese-developed algorithm, aiming
to run the platform on US-controlled technology.
Ultimately, President Trump may play a significant role
in selecting a US buyer.
But the question that everyone still keeps asking
is, what benefit will TikTok do for the
(18:12):
United States?
And I think that's a very, very good
question.
We really don't know what that answer is
at the current moment.
But we know that there's a lot of
people, let's say, got very upset when this
platform went down for just less than a
(18:32):
day, right?
So the question is, could America exist without
TikTok?
I mean, could they?
The countdown's on again, isn't it?
I think there are a lot of benefits
(18:54):
to TikTok.
But I do feel that it needs to
be regulated.
That's the thing.
I don't think it's the Chinese that are
running it that's the problem.
I think the problem is that no one's
really policing the platform.
I've got to say that if you report
something, they respond to it immediately.
(19:15):
But I still think there needs to be
another level, more transparency.
I just feel that there's just too much
of people getting hurt.
And although they're trying to resolve it, I'm
not 100% certain that they have or
(19:36):
that they'll be able to.
I think the US will make a huge
difference if they should decide to buy it.
Now, out of the $200 million valuation,
$100 million would be paid back to
the United States government.
The question is, how much is TikTok worth
without the algorithm?
I think it was like $40 million.
So if they don't get the US to
(19:57):
buy TikTok, TikTok has really very little value.
And we're just going to have to see
what happens.
But I'm very curious to know, when will
the TikTok, let's say clock, tick again?
(20:19):
So we had the January 19th day.
So when will TikTok possibly be suspended?
Well, we don't know exactly.
But we know that Trump signed an executive
order to suspend the TikTok ban for 75 days.
That's what he said.
So he signed an executive order to keep
(20:41):
TikTok operating for 75 days, a relief to
the social media platform users, even as national
security questions persist.
TikTok's China-based parent company, ByteDance, was supposed
to find a US buyer or be banned
on January 19th.
Trump's order could give ByteDance more time to
find a buyer.
But are they really going to do this,
(21:02):
or are they just going to milk this?
According to Trump, he said, and I quote,
I guess I have a warm spot for
TikTok.
Well, I think the reason he has a
warm spot is because it actually helped him
get a lot of votes.
A lot of the younger votes came out
through TikTok.
As you know, the platform right now
has 170 million US users, who were
(21:26):
not able to access TikTok for more than
12 hours between Saturday night and Sunday morning.
TikTok has been able to give Mr. Trump
15 million followers since he joined last year.
So, you know, the platform went offline before
the ban approved by Congress took effect, a shutdown done
not by TikTok itself, but by the US companies
(21:49):
that run it.
And, you know, after Trump promised to pause
the ban recently, TikTok restored access for existing
users.
Google and Apple, however, still have not reinstated
TikTok to their app stores.
So business leaders, lawmakers, legal scholars, and influencers
who make money on TikTok are watching to
(22:10):
see how Trump tries to resolve this thicket
of a wicket of regulatory challenges in the
legal, financial, and geopolitical issues, which are tending
to get bigger every day.
How did the TikTok ban come about?
Well, TikTok's app allows users to create and
watch short form videos and broke new ground
(22:31):
by operating with an algorithm that fed viewers
recommendations based on their viewing habits.
But concerns about its potential to serve as
a tool for Beijing to manipulate and spy
on Americans date back to Trump's first presidency.
In 2020, Trump issued executive orders banning dealings
with ByteDance and the owners of Chinese messaging
(22:52):
app WeChat.
Courts ended up blocking the orders, but less
than a year ago, Congress overwhelmingly passed a
law citing national security concerns to ban TikTok
unless ByteDance sold it to an approved buyer.
The law, which went into force not too
long ago, allows for fines of up to
$5,000 per US TikTok user against major
(23:13):
mobile app stores, like the ones operated by
Apple and Google, and the internet hosting services
like Oracle if they continue to distribute TikTok
to US users beyond the deadline for ByteDance's
divestment.
So right now, they're kind of in this
temporary okay period where they're allowed to, but
Apple and Google are just not really feeling
(23:33):
it.
Or maybe they're doing this intentionally.
Maybe they're hoping it doesn't stay around because
TikTok actually hurts Google and it actually
hurts Apple, unfortunately.
So it's interesting what's going on.
And what's going to happen right now, we
(23:55):
really don't know.
Trump is working on different options, and there's a lot of speculation.
And the question is, who will buy TikTok?
And I think the biggest issue right now
is, who is it that TikTok, ByteDance wants
(24:16):
to sell to?
And I think that's the biggest challenge we
have right now.
But Trump is definitely working on some things.
And we'll have to see what happens.
And the person that buys it, my only
concern is that, are they going to use
it for the right reasons?
Or are they going to exploit people by
making this a Mickey Mouse platform for their
(24:38):
own personal well-being?
I don't know.
I like the idea of the US policing
TikTok and getting $100 million to do so.
I think that's great that it's actually being
shared.
I think that's an amazing thing.
And it's a great way of ensuring that
TikTok is able to work and make sure
that things are pretty much on the up
(25:00):
and up.
So here's one I think you're going to
find very interesting.
This is a new type of therapy.
One that I think may just blow your
mind.
What kind of therapy is it?
Well, it's an ultra-fast cancer treatment, and
it may replace radiotherapy.
That's pretty cool.
(25:21):
So this is a groundbreaking new cancer treatment
known as Flash.
No, not the flash drive that you have
with your computer.
Not that one.
And they say it could revolutionize radiotherapy by
delivering ultra-fast high-dose radiation in under
a second, offering fewer side effects than conventional
therapies.
(25:43):
Developed through the experiments at CERN, the Flash
method has shown promise in destroying tumors while
sparing healthy tissue with early trials on animals
and humans, showing fewer adverse effects than traditional
radiation.
Now, this approach has the potential to treat
difficult cancers such as metastatic tumors and glioblastomas,
(26:06):
and could become a safer, more accessible option
globally, particularly in low-income
areas where radiotherapy access is limited.
However, challenges remain in developing the necessary equipment
and making it available to all patients, as
current machines are large and unfortunately expensive.
Researchers are optimistic that the new advancements could
(26:28):
lead to more accessible and effective treatments for
a wider range of cancers.
So this looks like there could be a
strong potential in allowing this to move forward.
Looks like it.
We're just going to have to see, you
know, what happens.
And I think anytime that we're able to
pioneer and, you know, make changes in the
health world, I think that's an amazing thing.
(26:49):
But the question is, are we doing the
right thing?
And will this benefit people, or will this
be something that's out of their reach?
I mean, I just read not too long
ago that some major services by a few
hospitals are literally being restricted.
Why?
I think it's because they're not getting enough
money in their pocket.
Let's be honest.
Everything is about money, even the health industry.
(27:12):
So we'll keep an eye on Flash and
see what they're doing.
And we'll definitely update you once we know
what's happening with that, because I'm really curious
to know if that's going to take off
in the US, or if it's going to
be a complete flop.
And under the Trump administration, the US cyber
defense faces some leadership loss.
Ouch.
So yes, under the Trump administration, the US
(27:33):
Cybersecurity and Infrastructure Security Agency, CISA (they
love acronyms, as you know), faced significant
challenges as it grappled with
cybersecurity threats, including Chinese hacking groups like Salt
Typhoon.
Jen Easterly, who led the CISA from 2021
(27:54):
until inauguration day 2025, warned of the agency's
uncertain future as rumors swirled about its possible
downsizing or elimination.
Despite limited resources, Easterly emphasized the agency's critical
role in protecting US infrastructure from cyber attacks,
citing her team's efforts in detecting and addressing
(28:15):
intrusions.
As Easterly departed, she reflected on the growing
threats, especially from nation state actors and the
importance of global cooperation in cybersecurity.
So I think the big issue that I
see with cybersecurity is that they throw a
lot of money at something, right?
They think they're working toward a solution, but
(28:39):
everyone's just kind of going their own direction
and there's no real organized plan.
That's at least what I've seen, is that
I don't see an organization, I just see
this complete mess where everybody just kind of
goes whatever way they want, and we don't
really, I don't know, you just don't see
a conscientious effort to do anything.
(29:00):
And ladies and gentlemen, the LA fires are
finally out.
The cause may remain unknown, but here's something
that I think might excite you a little
bit.
AI is now searching for clues.
Yeah, we use AI for a lot of
stuff, don't we?
(29:20):
So the cause of the recent Los Angeles
wildfires remains uncertain with many potential factors contributing,
including dry conditions, powerful winds, and fallen power
lines.
As investigations continue, the US Forest Service is
collaborating with computer scientists to harness AI in
identifying fire causes.
(29:41):
A study revealed that over 50% of
the wildfires in the Western United States go
unsolved, hindering prevention efforts.
AI models are being used to analyze past
data and predict likely causes, such as human
activity or lightning, aiming to improve fire risk
(30:05):
prevention and public awareness.
While AI offers promise, it is still a
work in progress, showing potential for proactive interventions
to safeguard vulnerable communities.
So, you know, it's interesting what they're doing.
(30:27):
I mean, using AI to detect the cause
of the LA fires is interesting, right?
And so I think, you know, what they're
trying to do is use stuff like, you
know, drones and different sensors and use high
(30:47):
tech to really decide what's going on.
But the question is, are they really going
anywhere?
I don't know.
Or is it just some big hype?
Again, I don't know.
But it always sounds good when we can
throw technology in there, right?
And so they're using AI to hopefully prevent
(31:12):
fires.
So for instance, AI algorithms can analyze patterns
and data from smoke detectors, temperature sensors, and
other IoT devices to predict where and when
a fire is most likely to occur.
So that's kind of where they're going.
They're kind of backing in from that.
And this predictive capability can enable proactive measures
to prevent fires before they start.
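To make that concrete, here's a minimal sketch of the kind of predictive model being described, written in Python. Every detail in it, the feature names, the data, and the classifier choice, is hypothetical and made up for illustration; an actual system would be trained on historical sensor readings and real incident records.

```python
# A toy sketch of fire-risk prediction from IoT sensor readings.
# All features, thresholds, and data below are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Columns: temperature (C), humidity (%), wind speed (km/h), smoke-sensor level (0-1)
X = rng.uniform([10, 5, 0, 0.0], [45, 90, 80, 1.0], size=(500, 4))

# Toy label: hot, dry, windy conditions, or a high smoke reading, count as high risk
y = ((X[:, 0] > 35) & (X[:, 1] < 20) & (X[:, 2] > 40) | (X[:, 3] > 0.8)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score a couple of new sensor readings for fire risk
new_readings = np.array([[42.0, 12.0, 55.0, 0.3],
                         [22.0, 60.0, 10.0, 0.1]])
print(model.predict_proba(new_readings)[:, 1])  # estimated probability of "high risk"
```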
(31:33):
So not only are we using them to
figure out what started the fire, we're using
them to figure out how to prevent fires
from starting.
So I think that's a very interesting thing.
And so we all know that AI is
not 100% accurate all the time.
So we have to build a lot in
(31:53):
there to make sure that what we're doing
is going to be giving us the right
answers.
So the question you may be asking is,
how can AI help us battle the wildfires?
And so it's a very good question.
Wildfire is natural in many forests, yet human
(32:17):
-caused climate change intensifies the heat and drives
fires, tripling burned areas in California's mountains.
So they're trying to map the wildfires.
They're using ultra high definition AI camera systems.
Which they've been doing in the past, incidentally,
it's not new.
They're detecting red flags.
(32:39):
Not too long ago, an algorithm used by
Pano AI detected smoke from a lightning strike
that caused a remote high risk fire on
Bennett Mountain in Douglas County, Colorado.
So they're doing different things.
And AI is also being used for climate
change.
But I think the real question comes down to this:
is this something that's going to give us
(33:01):
a noticeable difference?
Or is it going to be something that
is just going to, I don't know, waste
time?
I think it has potential.
I will definitely say that it has potential,
right?
I definitely would say that.
So we'll keep an eye on what's happening
with the fires.
So I got a question for you guys.
How many of you out there use YouTube?
(33:23):
Oh, I feel bad for you.
Oh, I use it too.
But I feel bad for you because there
are lots of frustrated users that are using
YouTube.
And they're demanding answers because they're having to
put up with sometimes hour-long unskippable ads.
Now, YouTube users are complaining about the unfortunate
(33:45):
experience they're encountering with these very long,
unskippable ads.
Now, some are lasting up to an hour
or more.
Now, that's not the norm, but they are
getting some of those cases.
And it's disrupting their viewing experience.
Now, while some Reddit users claim to have
seen ads lasting up to 90 hours, that's
preposterous, it's suspected that ad blockers may be
(34:06):
causing these extended ads by either failing to
skip them or blocking the skip button.
In response, YouTube stated that such incidents may
be related to attempts to block ads as
the platform is actively discouraging the use of
ad blockers and pushing for users to either
disable them or subscribe to YouTube Premium for
(34:26):
an ad-free experience.
Google also hinted that such issues may be
used to deter persistent ad blocker users.
I don't know, I think this is like
crazy, the stuff that's going on with this.
And I feel that many people just are
okay that Google can do whatever the heck
(34:48):
they want.
Even if it's something that's not ethical or
something that's like unfair, well, we're Google and
we can do whatever we want.
And I get so tired of hearing that:
because we are Google,
we can do whatever we want.
(35:13):
I have a problem with that.
Because we are Google, we can do whatever
we want?
I have to say no.
I think Google was starting to really get
(35:33):
handed their head.
They've been trying to play this high, mighty
card, but we're finding that they're getting into
a lot of trouble.
Google continues to get sued.
(35:53):
Just a couple of days ago, Google lost
a bid to dismiss US state's lawsuit over
digital ads.
I mean, this is insane.
But Google has to learn that they do
not have the right to do whatever they
want.
And I think the more that they get
hit, the more that they're going to learn
(36:15):
that things are going to get bad for
them if they don't start behaving.
Apple wants to help Google defend search engine
deals worth billions.
Apple wants to defend its multi-billion
dollar search engine deal with Google, which is
(36:37):
in danger because Google has been found guilty
of violating antitrust laws.
So Apple has asked the court handling Google's
lawsuit with the US government for an emergency
stay so that Apple has time to intervene
and plead its case before a remedy is
decided.
The US Department of Justice sued Google for
anti-competitive behavior in a search market back
(36:58):
in 2020.
And after a lengthy battle, the Department of
Justice won.
A main component of the lawsuit was Google's
deal with Apple, which sees Google pay billions
annually to be the default search engine for
Safari.
The court decided that the agreement between Apple
and Google violates antitrust law
(37:20):
and is a major reason Google has been
able to maintain its search engine monopoly.
The US government asked the court to bar
Google from entering into contracts with Apple, among
other restrictions, and that will cost Apple a
lot of money.
In 2022, for example, Google paid Apple $20 billion.
Apple already asked the court to allow it
(37:41):
to be more involved in the case as
remedies are decided on, and the court denied
the request due to timing.
Apple appealed the decision and is asking for
a stay while the appeal plays out.
Apple says that because its deal with Google
is at stake, it deserves a right to
participate, and without a stay, it will suffer
clear and substantial irreparable harm.
(38:03):
So here's something interesting that I quote from
Apple's filing.
Apple will be unable to participate in discovery
and develop evidence in the targeted fashion it
has proposed as the litigation progresses toward a
final judgment.
If Apple's appeal is not resolved until during
or after the remedy trials, Apple may well
be forced to stand mute at trial as
a mere spectator, while the government pursues an
(38:26):
extreme remedy that targets Apple by name and
would prohibit any commercial arrangement between Apple and
Google for a decade.
I think Apple is just concerned about their
pockets and the amount of money.
So in addition to prohibiting deals between Apple
and Google, the United States Department of Justice
also has more extreme remedies in mind, including
forcing Google to sell its Chrome browser and
(38:46):
uncoupling Android from other devices like Google Search
and Google Play Store.
Google has a lot to defend against and
will prioritize Chrome over its deal with Apple.
So that's a very interesting thing that they
would rather lose their deal with Apple than
lose Chrome.
So that just shows you friends kind
of finish last here.
It's definitely all about business.
(39:08):
When initially asking to take the larger role
in the case, Apple said that Google can
no longer adequately represent Apple's interests because of
the wide scope of the case.
Unsurprisingly, the DOJ does not want Apple involved
in the remedies portion of the trial, which
is set to start in April.
If the court decides that Google can't pay
Apple to be the default search engine on
(39:30):
Safari, Apple would still have to offer Google
Search as an option in some capacity, but
would not be able to continue to collect
money for doing so.
So it's very interesting that they did this.
And I suspect that if I had to
guess, that Google probably paid Verizon to make
(39:54):
their search engine default.
You know, Verizon had stated not too long
ago, their executive testified that Google Search always
was pre-installed on the phones.
So, you know, we didn't talk about anything
specifically, but, you know, a 28-year Verizon
(40:16):
veteran who was on a team from 2017
to 2023 that struck deals with Google
to pick software to preload on the carrier's
phones, testified in federal court in Washington.
Quote, to the best of my knowledge, I believe
it is pre-installed all the time.
Of course he knows that.
The government argues Google's $10 billion in payments
annually to mobile carriers and others help the
(40:40):
California-based tech giant win powerful default positions
on smartphones and elsewhere, which fed data into
the other lucrative parts of its business such
as online advertising.
I've always known Google to be no good.
And what clinched the deal was when they
had the arrogant people.
They didn't know how to spell a town
right.
(41:00):
They were arrogant.
They didn't do anything right.
And then they just keep asking for more
money.
So I'm sorry, Google, like I wouldn't spend
a penny on your search because it just,
it's not worth it.
It's better to just use you organically and
come up for the reasons that you should
come up, which is for high quality content.
(41:21):
Well, all right.
And here's one that I think is going
to open up a lot of people's eyes.
Character AI claims the first amendment protection in
motion to dismiss.
Character AI is a chat bot platform and
they filed a motion to dismiss a lawsuit
filed by Megan Garcia, whose son Sewell Setzer
(41:41):
III died after becoming emotionally attached to a
chat bot named Dany.
Garcia claims the platform's technology led to her
son's suicide.
In response, Character AI argues that it is
protected by the first amendment as the speech
generated by its chat bots is similar to
other forms of expressive media like video game
(42:01):
interactions.
The platform's counsel suggests Garcia's real goal is
to prompt regulations that would limit chat bot
capabilities.
Character AI has faced multiple lawsuits over the
potential harm of its technology, including issues related
to minors and unsafe content.
Despite these challenges, the company has implemented safety
measures to improve the user experience.
(42:24):
So this is a good place for me
to start right now and it's probably a
really good one is, so what safety measures,
which is a really good question, has Character
AI implemented?
And I think that's probably a really good
one.
So they claim that they prioritize teen safety.
(42:47):
They're collaborating with several teen online safety experts
to ensure that the under 18 experience is
designed with safety as a top priority.
And the experts include ConnectSafely, an organization
with nearly 20 years of experience educating people
on this.
But there are a lot of new safety
rules that are gonna be implemented and have
(43:08):
already been.
So one of them is character moderation.
So they conduct proactive detection and moderation of
user-created characters, including using industry standards and
custom block lists that are regularly updated.
They proactively, in response to user reports, remove
characters that violate their terms of service.
(43:29):
They also adhere to the DMCA requirements and
take swift action to remove reported characters that
violate copyright laws or other policies.
Users may notice that they have also recently
removed a group of characters that had been
flagged as violative.
And these will be added to their custom
block list moving forward.
And it means that users also won't have
(43:51):
access to their chat history with the characters
in question.
So it sounds like they're trying to monitor
things, but that might not be enough.
But I do think it is at least
a good start.
All right, and Fitbit.
Yes, Fitbit was hit with a $12 million
(44:14):
fine over Ionic smartwatch burn injuries.
Ouch, I wouldn't wanna be wearing that.
And I had a Fitbit.
Thankfully, it did not burn me.
I just used it for a few weeks
and I realized that I didn't wanna be
married to something on my wrist like this.
So Fitbit has agreed to pay a $12.25
million penalty after reaching a settlement with
(44:36):
the United States Consumer Product Safety Commission, the
CPSC, over defects in its Ionic smartwatch that
caused burns to users.
Now the issue, which dated back to 2018,
continued into 2020 and involved the smartwatch's battery
overheating, which led to 115 reported
incidents, including 78 cases of burns.
(44:56):
Some injuries were severe. Fitbit had issued a
firmware update in 2020, but as you might suspect,
it did not fully resolve the problem.
The company also failed to immediately report the
hazard to the CPSC.
As part of the settlement, Fitbit must submit
annual safety audits and ensure enhanced compliance with
the Consumer Product Safety Act.
(45:17):
Now I'll tell you what I tell everyone.
If they were not guilty, they would not
have agreed to pay the fine.
And so the fact that they have agreed
on the settlement of 12.25 million penalty
means they know they were guilty.
And Bedrock Energy targets geothermal for cooler data
centers.
(45:37):
Bedrock Energy is leveraging geothermal energy to cool
data centers and improve the comfort of commercial
office spaces by drilling deeper than traditional geothermal
wells.
The company minimizes the land footprint required for
installations, making it ideal for urban locations.
Their installations, including places in Austin, Texas, and
(45:58):
Utah are expected to become profitable in the
next year.
Geothermal cooling offers higher efficiency than water and
air cooling, particularly during hot and humid conditions,
and could be an ideal solution when paired
with solar farms.
Recently, Bedrock raised $12 million in Series A
funding to further expand its operations.
So I think this might be a good
(46:19):
idea for more sustainable types of energy.
The question is, is there any harm with
this that we're not aware of?
A lot of times we hear about, you
know, the benefits of something being really positive,
but they don't tell us about some of
the cons.
And that's the thing I always realize is
that when you hear these benefits, you wanna
(46:41):
see are there any, let's say, disadvantages.
And usually you'll learn about them, but they
don't come out overnight.
It's usually after a problem, a lawsuit, or
some frustration that people learn about them.
And I should say, we're getting toward the end now,
(47:02):
we have a few more to go here.
Our next topic I wanna talk about is
that ChatGPT suffered a major, major outage on
January 23rd.
So ChatGPT faced a major outage affecting users
worldwide.
What was the cause?
(47:22):
And how did OpenAI resolve the situation?
Well, this affected AI reliability.
And I think the question with all this
comes down to a few things.
So the outage affected the platform on the web
and other services. OpenAI
acknowledged the issue and identified the cause quickly,
working to implement a fix by 7:09
AM.
The company confirmed that the problem was resolved
(47:44):
and was monitoring the situation.
The outage also impacted OpenAI's API, but was
also resolved.
This is not the first time ChatGPT has
faced service disruptions, with a similar issue occurring
just last year, in December 2024, due to
bugs in a telemetry service.
So I think right now, they're trying to
(48:05):
give away so much free AI.
And the problem that I see with that
is that it's not always working in the
best manner possible.
So the fact that they're making AI free
for so many people is great.
But I think that this free AI is
not really living up to what it should
(48:26):
be living up to.
That's probably the biggest thing I wanna share
with all of you is that it has
to live up to a standard.
And I just feel right now that AI
is less reliable than it was a year
ago.
(48:46):
GPT-4, I feel, is worse than
3.5. Why?
ChatGPT-3.5 seems better than GPT-4.
ChatGPT-4 is slow, it's buggy.
And people say it's useless.
(49:07):
And to charge $20 a month for a
service that doesn't even work, I got a
problem with that.
GPT-4 is getting worse and worse with
every single update.
So they keep making it dumber is what
a lot of people say.
But this last one that they made is
(49:28):
really pointless.
The answers aren't as intelligent as they were.
The code has more problems and its ability
to remember the conversation is like gone.
So you have to keep repeating facts over
and over again.
It's a shame to see something that had
so much potential like really drop.
A lot of people said the same thing.
I've noticed that it's just not the ChatGPT
(49:50):
we were used to.
Now, did they do this on purpose?
Did they do this because they're waiting?
Because they're holding back on us?
I don't know.
But I'll tell you one thing.
I don't trust a lot that goes on
in the AI industry because everything in the
AI industry is about money.
So the question might be asked is, so
why?
And this is my question.
Why did OpenAI make ChatGPT 4.0 so
(50:14):
bad?
A lot of people have some theories on
this.
We know from the API that the new
tools in GPT will be based on the
32,000-token context model rather than the 8
,000-token context model.
That means it will have four times
(50:36):
the memory, or eight times that of vanilla GPT,
since that one is limited to 4K, whereas
Code Interpreter and plugins are 8,000.
Since they're probably looking to release it in
some fashion, many people think that they tweaked
the system prompt to more strongly prevent content
that breaks the terms of service.
And they made it longer.
(50:57):
But each token in the system prompt takes
up space in that memory, which with the current
GPT-4 is only 8,000 tokens, versus the
4,000 of the vanilla model.
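Just to picture what that complaint means, here's a tiny back-of-the-envelope sketch in Python. The figures are the ones tossed around in that discussion, not official numbers, and the split between system prompt and reply is purely illustrative.

```python
# Rough token-budget arithmetic: a longer system prompt leaves less room
# for the actual conversation. Numbers here are illustrative only.
context_window = 8_000   # total tokens the model can see per request (figure from the discussion)
system_prompt = 1_500    # hypothetical size of a longer safety/system prompt
reply_reserve = 1_000    # tokens held back for the model's own answer

available_for_chat = context_window - system_prompt - reply_reserve
print(f"Tokens left for the conversation itself: {available_for_chat}")
```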
So I feel right now the quality has
gone down.
The question is, will it ever get better?
Of course, these things do seem subjective.
(51:18):
But you don't have to take my opinion.
You could go right on to ChatGPT.
You go on to some other ones like
deepai.org and stuff.
And they all seem to be able to
mess stuff up.
And so what I mean by that is,
and this is a real interesting one.
Why is it that ChatGPT can't do math
(51:38):
problems correctly anymore?
It's an interesting thing.
I almost think that they did this on
purpose.
I really do.
(52:01):
A lot of people are saying, why is
Chat so bad at math?
The thing about ChatGPT as a calculator is
it has a really advanced case of
dyslexia.
And the chatbot is bad at math, and
it's not unique among AIs in that regard.
(52:22):
So Anthropic's Claude can't solve basic word problems.
Gemini fails to understand quadratic equations.
And Meta's Llama struggles with straightforward addition.
So how is it that these bots can
write soliloquies yet get tripped up by grade
school level math?
It has something to do with tokenization,
the process of dividing data up into chunks.
(52:45):
For example, splitting the word fantastic into the syllables
fan, tas, and tic helps AI densely encode information.
But because tokenizers, the AI models that do the
tokenizing, don't really know what numbers are,
they frequently end up destroying the relationships between
digits.
For example, a tokenizer might treat the number
380 as one token but represent
(53:06):
381 as a pair of digits, 38 and
1.
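If you want to see that digit-chopping for yourself, here's a small Python sketch using the open-source tiktoken tokenizer, which is my choice for illustration since the show doesn't name a specific one. Exactly how a given number splits will vary by tokenizer; the point is just that the pieces don't line up with how we think about digits.

```python
# Print how a tokenizer splits words and numbers into tokens.
# tiktoken is OpenAI's open-source tokenizer library; the encoding
# name below is one commonly used with GPT-4-era models.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

for text in ["fantastic", "380", "381", "57897"]:
    token_ids = enc.encode(text)
    pieces = [enc.decode([tid]) for tid in token_ids]
    print(f"{text!r} -> {len(token_ids)} token(s): {pieces}")
```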
But tokenization isn't the only reason math's a
weak spot for AI.
AI systems are statistical machines: trained on a
lot of examples, they learn
the patterns in those examples to make predictions,
like that the phrase "to whom" in an email
often precedes "it may concern," et cetera.
For instance, given a multiplication problem,
(53:28):
ChatGPT, having seen a lot of multiplication problems,
will likely infer that the product of a number
ending in something like a 7 and a 2
ends in 4, but the rest of the digits will
probably not be the right answer.
So GPT-4o struggles with multi-digit multiplication.
I think that is a big problem.
They say it's hopeful that it might get
(53:48):
better, but I don't really believe it's going
to happen anytime soon.
And that's just the honest truth.
I think they've got to do a lot
better.
And you might be saying, how do we
make ChatGPT better at math?
(54:11):
Not easily.
The thing is when you start asking complex
questions, it gets things wrong.
Is it a prompt?
Not really.
It's a language model.
It's not a calculator.
It's not really designed for math or for
logic.
(54:33):
So it can do a lot of math,
but it can't do a lot of the
common math or the math that we've gone
to school for, even the higher level math.
It just hasn't been trained enough.
So one person said he has a solution.
(54:53):
He asked it to write Python code
to solve the question.
It usually imports different things and solves it.
Then he runs that same code in Python to get
the result.
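Here's a rough sketch of that workaround in Python. The ask_model function is only a stand-in for whatever chatbot you use, returning a canned reply so the example runs on its own; the point is that the arithmetic ends up being done by Python rather than by the language model.

```python
# Ask the model for code instead of an answer, then run the code yourself.

def ask_model(prompt: str) -> str:
    # Placeholder: a real version would send the prompt to your chatbot of choice.
    return "result = 57897 * 12832"

question = "Write Python code that multiplies 57897 by 12832 and stores it in `result`."
generated_code = ask_model(question)

namespace: dict = {}
exec(generated_code, namespace)  # only run code you trust
print("Computed by the generated code:", namespace["result"])
print("Sanity check with plain Python:", 57897 * 12832)
```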
At the end of the day, it's a
language model, and it's a good one.
To do math, you better seek something else
or use a different strategy.
Because at the end of the day, ladies
and gentlemen, I mean, you have to realize
that it is a language system, right?
(55:16):
And so the way we think of numbers,
the way we think of just subtracting, it
can't seem to figure this stuff out.
So the more we get immersed in tech,
the more we start realizing how AI can
actually, let's say, enhance our lives, we have
(55:36):
to also be understanding that AI can also
hurt our lives.
So we have to have a delicate balance
of what is AI, when can we use
AI, using it as a guide, but don't
use it as your be-all, end-all.
Because if you do, you're gonna be very,
very disappointed.
And I think that's the thing that a
(55:59):
lot of people don't realize is that AI
has potentials, but for right now, I feel
that it's not being trained in what it
should be getting trained.
It's getting trained in a lot of nonsense,
if you ask me.
(56:20):
And it's just not getting used to doing
the simple, simple things.
So, I don't know.
I think it's going to have to go
through some more revisions, and something that a
(56:40):
lot of people might not really understand, but
it's gonna have to happen.
And so you can't just think that everything
AI does is right.
You have to also understand that what we
think of as an idiom, what we think
(57:01):
of in one language, if you ask it
to translate, of course, that's gonna get it wrong
when you go from language to language,
right?
But it can't even understand what we do
every day, and then how to parse that.
So, I think it's got a long way
to go.
And you might have a good question for
me, and that is, how long till chat
(57:23):
GPT can do good math?
Um, a while.
It can explain concepts, but it can't do
math for squat.
And so the question is, will it ever
(57:44):
be good at math?
Don't know.
It wasn't built for math.
That's the number one thing I want to
tell you.
So, I hope you guys learned a lot.
So, this is a weekly show.
We also rolled out something brand new called
JMOR Tech Byte Blunders.
(58:05):
It's designed to make you guys laugh.
You can reach that by visiting my Linktree,
or you can just go to JMOR.com
under social, and you'll find it right
there.
I think it's important that we understand
the reason for things, all right?
Have yourself a great rest of your night,
(58:26):
and I'm gonna catch you real soon.
Be well, everyone.
Bye.