Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:07):
Hi everyone, I'm John C. Morley, the host of
The JMOR Tech Talk Show and Inspirations
(00:50):
Well, hey guys, good evening.
It is John C. Morley here, serial entrepreneur, and
welcome to The JMOR Tech Talk Show.
It is great to be with you here
on the show.
And we have another great show for you
tonight.
In case you guys did not know, we
do this show every single week, but you
(01:10):
can always check it out, as well as
other previous episodes by going to believemeachieve.com.
All right, guys, so tonight's show is Tech at a Turning Point: From Endings to AI-Driven Futures.
We're on series four, and this is show
number 34.
(01:32):
First thing I want to talk to you
guys about, so you guys know we use
lots of different streaming platforms and services.
So we had been with a product that
started with an S, you guys can figure out who that is.
And they were started by two friends, great
platform.
And then they actually were doing what they
call, you know, like town meetings every week,
(01:55):
like Sundays.
And it was great.
And we're with them for a very long
time.
They even had the ability so that you could call on a telephone 24 hours a day and reach somebody by an 800 number.
That was real cool.
And then what happened was the friends announced
that they're selling the platform.
But don't worry that they're still gonna stay
(02:16):
very heavily involved and that everything is gonna
be just where it is.
And they're gonna make, well, I knew that
wasn't gonna stick.
So within, I'm gonna say maybe three to
six months of that, they started to slowly
disappear.
About six months after that, I was traveling
(02:39):
and you know, I do my shows, whether
I'm at my home studio or my office
studio.
And I could not log on to this
S platform.
And so you have to use basically your email and then they do, you know, a six-digit one-time code,
(03:03):
an OTP.
And so we weren't getting the code.
So there was no way to reach them.
I use that little stupid messenger system, which
a lot of companies are doing because that
kind of hides them from having to get
on the phone.
That's why people do that, right?
And it's kind of like, I call it,
you know, they support whenever they want to,
right?
They don't have to support right now.
(03:24):
They support whenever they feel like it.
And so I remember during this time, I
said, well, I have to get the show
out.
And so what I decided to do was
to record the show locally.
I was in Florida and to send it
back to our studio and push it out
(03:44):
from there because I was already logged into
the studio at home and also in my
office studio.
So I pushed it out to there and
then I scheduled it for a few minutes
later.
Well, still couldn't get into this platform.
So I had to do that for a
few days.
So this is getting kind of crazy.
So that night I filed a ticket.
So that was like, I think that was
(04:06):
Thursday night.
So Friday, Saturday, Sunday, Monday, Tuesday, they got
back to me.
Now, granted it was a holiday weekend.
Okay.
But you still should have support around the
clock, especially for us professional streamers that do
this all the time.
And so nobody got back to me.
(04:27):
I got an email just telling me I
can update my email address and that was
it.
It was just useless.
So they were just not helping me at
all.
And I couldn't talk to them by phone.
There was no chat.
It was like terrible.
So I tried another platform.
Platform starts with an R.
Platform we're using right now.
We might be leaving soon.
(04:47):
The reason is, so we use professional equipment
like Focusrite and Shure microphones and things like
that.
So we use high quality equipment.
And so about, I don't know, maybe about
four or five days ago, we started noticing
that every time we were going online within
(05:07):
about 15 or 20 minutes, we would get
a message saying that the audio device had
disconnected and is restarting.
Now, granted, this didn't happen to any other
streaming software, any other meeting software, Zoom, et
cetera, didn't happen anywhere else.
So I reached out to their team and
they were useless.
I put a ticket in this past week
(05:29):
when we noticed it around Sunday night and
all they just kept doing was pretty much
prevaricating and just appeasing me the whole week.
Tonight, I got a lady that I spoke
to via chat, same person, and I asked
her qualifications and she told me she doesn't
feel comfortable sharing.
I said, well, then you're obviously not an
(05:49):
engineer.
So don't call yourself an engineer if you're
not.
And then they told me they're going to
have their engineer get back to me and
now they're blaming it on my equipment.
Okay.
So we now have a call set up
with Focusrite just to prove them wrong.
And we're probably going to be leaving the
platform and moving to like R or something,
which is another platform.
(06:10):
And if that doesn't work, we'll probably go
to a professional stream platform, which is like
just used for private type broadcasting.
So a little frustrating this week, you know,
really want to put high quality content out,
but some of these streaming companies, unfortunately, you
know, they take your money for the year.
And then the support they provide is like,
well, kind of non-existent.
(06:32):
But again, we have a great show for
you tonight.
I do invite you to check out believemeachieve.com.
And in case you are new here, I
do want to take this opportunity to welcome
you.
Yes.
Welcome you to The JMOR Tech Talk Show.
If you are coming back, well, welcome back.
It is always great to have you back
here on The JMOR Tech Talk Show.
I am John C.
Morley, serial entrepreneur, podcast host, podcast coach, engineer,
(06:56):
marketing specialist, video producer, and passionate lifelong learner,
and also a graduate student that is pursuing
his MSCSAI, and then will be pursuing a
PhD in that field.
So I hope you do enjoy the show
as well as all the other shows.
But something very interesting happened.
(07:17):
It was back just about a week or
so ago.
And maybe some of you have been around
for a while.
Remember the sound: You've got mail.
And in fact, sometimes you would set this
up.
And for a long time, you'd get a
message saying that it couldn't connect.
There were things like Winsock errors.
(07:38):
We don't have that anymore.
And so this is when we had dial
up because the modem would have to dial
out, then it would have to connect.
And sometimes with the dialing up and hanging
up, well, there was a memory leak between
the Winsock and the modem.
It caused a big problem.
Their solution was reboot the computer, and then
you could probably dial up four more times.
(07:59):
Well, I would be dialing up AOL maybe,
I don't know, like in the morning, I
would dial up before I went to school.
I would have a dial up in the
afternoon, sometime mid-afternoon.
I'd have a dial-up at the end of the day.
So remember, we weren't online live.
So I wanted to get the information as
quickly as possible.
And it wasn't as easy as just going
to your computer and getting it.
(08:19):
So I wanted to have information there.
So then when I responded to an email,
when I clicked it later on, it would
just go out with that batch.
But imagine this, ladies and gentlemen.
So Verizon bought AOL.
And AOL has decided to finally, well actually Verizon has decided, to end the dial-up era.
After decades of connecting the world, AOL has
(08:41):
officially announced the end of its iconic dial
up internet.
The familiar "You've got mail" sound will soon fade into history, marking the close of what I call an era in digital communication.
While AOL mail and apps will continue, nostalgia
and the evolution of internet collide in this
major tech milestone.
(09:03):
And if you've used AOL, you probably can
have a little bit of a passion for
this, or maybe you used CompuServe, which was
a similar service.
And the whole idea was that you could
go online and use their, well, their special
space, proprietary, where you could access all their
information, you'd get their advertisers.
And it was a very simple way to
(09:24):
send and receive mail because you just clicked
on write.
You clicked on send, and then you had
your own little address book.
And so they've announced, ladies and gentlemen, officially, as you've heard it here first, that on September 30th, 2025, which is a little over a month from
(09:45):
now, they will officially discontinue AOL dial-up.
I know that's a hard thing for everybody
to swallow, but it's coming.
Robots and timber homes are making a comeback, and Britain's housing crisis might finally meet its match as robots meet sustainable
(10:07):
timber.
These automated builds promise faster, greener, and cheaper
housing solutions without sacrificing quality.
Could this combination revolutionize construction as we all
know it, or not?
So a lot of people are very enthused
about this moment, the fact with robotics, sustainable
(10:29):
timber, and the whole thing of trying to
preserve that carbon footprint, or so people say.
But a lot of times it just becomes
nonsense because if they preserve it one place,
they're just going to burn it off somewhere
else.
And Meta, yes, Meta makes 2FA mandatory on
Instagram accounts that are, let's say, high profile.
(10:55):
A lot of my Instagram accounts, my John
C.
Morley serial entrepreneur accounts, they required me not
too long ago to go in and actually
set up 2FA, or I would not be
able to log into those accounts and post.
So they're doing this as a security step.
Meta now requires two-factor authentication for what they call high-risk accounts, those with several
(11:16):
thousand or more interactions, to protect creators and influencers from growing cyber threats.
So digital safety is becoming the new standard
in social media.
But is this more of an annoyance?
So as of August 17th, which was just
a few days ago, Meta has decided to
(11:37):
step up security by increasing 2FA.
So what is 2FA?
Two-factor authentication.
So that basically means that when you put
your password in, it's going to do basically
a one-time password.
So an OTP would be sent to either
your phone via SMS or text, or it
(11:59):
would go to the Microsoft Authenticator app or
some other Authenticator app.
You'd have to put that code in, which
those codes change like every 30 seconds or so in those authenticator apps.
And the code they issue is good for
about maybe one to three minutes.
So this whole thing is to make it
so that if you're trying to log in,
(12:20):
in fact, Slack has always had 2FA and
we use it all the time, but some
people have just decided not to use 2FA.
Why?
I don't know.
And so their accounts have gotten hacked, like at Disney, et cetera, right?
And so by enabling 2FA, if somebody was
to hypothetically get my email address or even
get my workspace or even get my password,
(12:44):
they would not be able to log in
unless they had the two-factor code.
Pretty cool.
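To make the mechanics concrete: authenticator apps generally implement the standard TOTP algorithm (RFC 6238), where the server and your phone share a secret and both derive the same short code from the current time window. This is a minimal Python sketch of that idea, not Meta's or any vendor's actual code, and the secret value here is invented. Codes typically rotate every 30 seconds, and servers usually accept a window or two of clock drift, which is why a code stays usable for a minute or so.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (RFC 6238 style)."""
    counter = int(at) // step                       # which 30-second window we're in
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)   # keep the last `digits` digits

def verify(secret: bytes, submitted: str, at: float, drift: int = 1) -> bool:
    """Accept the current window plus +/- drift windows (clock skew, typing delay)."""
    return any(totp(secret, at + i * 30) == submitted for i in range(-drift, drift + 1))

# Example: the code your phone shows right now for this (made-up) shared secret.
print(totp(b"demo-secret", time.time()))
```

The point is that an attacker who steals only your password still can't produce the right six digits without the shared secret on your device.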
So I'm glad that Meta is doing it.
However, I think they should have given people
a little more warning.
And when Meta decides to do something, they
like just do it like cold turkey.
They don't give people a warning, right?
Even when they decided to delete the content,
(13:05):
when you upload like streams and stuff, they
just give you a warning.
They didn't give us much warning.
I think maybe got a few weeks or
a month warning.
It's just, they just do whatever they want.
So I don't know, guys.
I like that they're doing it.
I don't like in the manner in which
they've decided to execute it.
And a Biopatch now protects farm workers. Innovative
(13:27):
wearable technology is apparently saving lives in fields around the world.
The tiny Biopatch monitors vital signs like heart
rate and skin temperature, helping prevent, well, the
heat-related illnesses among farm workers and demonstrating
how tech can protect our human health.
And I think that's an important use of
(13:47):
technology to keep us healthy, right?
So this Biopatch could save farm workers from
heat exhaustion and other heat-related illnesses.
Just wearing this Biopatch, monitoring all their vital
signs, a tiny device that could be a lifesaver
in the fields, keeping workers safe in the
(14:09):
extreme heat conditions.
So that's a pretty cool thing.
And Tunisia's AI prosthetics.
Well, Turing Bionics is changing the game with
AI-powered prosthetics that mimic natural movement.
Lightweight, 3D-printed, and non-invasive, they are
(14:31):
advanced limbs that make mobility more accessible and
redefine what's possible for amputees.
So I think that's a cool use of
technology and hopefully that'll help a lot of
people that were not able to walk.
So this Tunisian startup, Turing Bionics, is claiming
(14:52):
to revolutionize prosthetics with AI.
And because of that and 3D printing and
muscle sensors, this lightweight, non-invasive limb moves
like the real thing, which is going to
be a big advancement for prosthetics.
And it's Trump now versus the Intel CEO
and the president there now calling for Intel
(15:19):
CEO Lip-Bu Tan to step down over alleged
conflicts tied to China.
This high profile move raises questions about corporate
governance, geopolitics, and the future direction of one
of America's major chip makers.
Is this ethical though?
Everybody's been asking because we posted the little
(15:39):
teasers during the week and people saying this
can't be legal.
And I'm like, I don't think it can
be either guys.
I really don't think it's legal.
So President Trump demanded that Intel CEO Lip-Bu Tan step down.
He demanded, he didn't ask, he demanded, calling
him highly conflicted over his ties to Chinese
firms.
So Mr. Trump, I see that you're saying
(16:02):
that, but if we were to, let's just
say, look at the other side of the
coin, would we find you're 100% clean?
I don't know.
Just a question to ask.
A wise person once told me, John, you
know, don't throw rocks because your friends live
in, or your strangers live in glass houses.
(16:22):
Remember, don't throw them, because they will retaliate and throw them back at you, because you also live in a stone house.
So what this means for the chip maker and its turnaround plans is still a little bit of a mystery.
We'll have to see what's going to go
on with that juncture.
And Instagram has a little bit of a
glitch.
(16:43):
A little, and that's to say the least.
Instagram AI ban glitches: social media chaos hits Instagram.
AI mistakenly bans many accounts accusing users of
violating things that they didn't do.
Suspensions, reasons, and confusion highlight the challenges of
AI moderation in today's digital landscape.
(17:07):
So there are a lot of angry users that are confused, that are worried.
And the real story behind this Instagram ban,
Instagram users worldwide are facing wrongful bans, accused
by Meta's AI of what they call, let's
say child inappropriate activity by adults.
The accounts are basically being suspended.
Some are reinstated and sometimes banned again.
(17:30):
People are very angry about this, even worrying
police might get involved.
But again, you know, when we use AI
to make a decision, right, we have to
make sure that there are humans always in
the loop.
So we'll keep an eye on that guys.
And the United Kingdom, well, is coming to
(17:51):
bat with facial recognition vans.
Police vans equipped with live facial recognition technology
are rolling out across England as we speak.
While intended to catch criminals, critics are warning
about privacy and surveillance and sparking debates over
the balance between safety and our own civil
(18:12):
liberties.
So with the UK and this massive rollout,
more police vans with live facial recognition scanning
crowds to catch violent criminals and people that
are not, let's say, doing things properly as
adults.
Critics are warning it's a step towards surveillance
state, while the government says it's safe, targeted,
(18:36):
and lawful.
I don't know guys, I think that might
be a little bit of a stretch to
just put that on everyone.
I think that's a big, big no-no.
And AI, ladies and gentlemen, reunites Chile's lost
kids.
Decades after Chile's dictatorship tore families apart, AI
(18:56):
and other high-tech tools are helping reconnect
missing children with their relatives.
These innovations provide hope and healing, showing technology's
power for social good.
And I think that's a very, very big,
big issue out there.
So the military dictatorship started around 1973, went
(19:19):
to 1990, and thousands of children were taken,
displaced abroad, or just went missing, leaving families
searching for answers decades later.
While traditional DNA testing has provided some breakthroughs,
it also faces some accuracy bumps and accessibility
challenges.
Now organizations in Chile are turning to innovative
(19:40):
technologies such as artificial intelligence to trace missing
relatives, verify identities, and reconnect families torn apart
by the regime.
These tools are giving new hope to survivors, though the process remains complex and emotionally charged.
So this is not a very easy thing.
(20:02):
And it looks like Stripe has bit off
a little more than they can chew.
Stripe's LGBTQ plus payment error: Stripe allegedly apologizes, well, they had to apologize, after the staff mistakenly blocked payments for LGBTQ plus services.
(20:23):
The correction underscores the importance of clear policies,
corporate accountability, and ethical practices in the digital
payment space.
So what actually was going on and why
did they do this?
So here's the thing, they have no problem
allowing LGBTQ plus businesses to sell their different
(20:44):
types of products and services.
What they have an issue with is those using their systems to sell adult-related, non-PG content and services to individuals.
So that's a pretty big jump and none
of these companies were doing that.
So that's a pretty big stretch.
(21:08):
I mean, I think maybe they just came
up with that because they realized how stupid
it was.
I don't know.
I think that's just them talking with, well
basically their head not screwed on right.
And UK adult site traffic is dropping. Age verification rules in the UK are causing a sharp decline
(21:29):
in adult website traffic.
While designed to protect minors, the regulations are
reshaping online content consumption and sparking debates over
privacy and digital freedom.
Some more challenges, right?
So the UK's new age verification rules are
now in effect and like I said, they
(21:49):
really caused a sharp decline in that traffic.
And the regulations require users to verify their
age before accessing these types of sites with
this type of content, meaning millions of visitors
must now use third party verification services.
Many sites that fail to implement these checks
have been blocked or restricted.
(22:11):
While others are struggling to adjust to the
requirements, industry experts warn that the changes could
reshape online adult content consumption in the United
Kingdom, since casual browsing without verification is no longer possible.
The move aims to protect minors from accessing
explicit material, but it has sparked debate over
privacy, digital rights, and the broader impact on
(22:33):
internet freedom.
You know, it's something, once something has an
issue, we come up with a resolution because
we want to cover all cases, but that
might not be the most ethical thing to
do.
But sometimes people don't really understand that very
well.
And ladies and gentlemen, Rolls-Royce bets on
(22:55):
AI big time.
Rolls-Royce aims to become the United Kingdom's
biggest company by combining AI with small modular
nuclear reactors.
Wow.
The move highlights innovation in energy, aerospace, and
industrial strategy, showcasing how AI can power economic
growth.
(23:16):
That's a pretty amazing thing.
So Rolls-Royce believes that AI is going
to be, well, their next main claim to
fame.
And imagine this, powered by an SMR, small
modular nuclear reactor, possibly making it the UK's
most valuable company.
(23:37):
The firm has signed deals to supply the SMRs in the UK, and it is aiming to meet a projected global demand for the units by 2050, each costing up to three billion pounds.
CEO Tufan Erginbilgiç highlights the company's unique nuclear
capabilities and sees AI-driven growth as a
(23:59):
major opportunity for UK industrial strategy.
Alongside the nuclear ambitions, Rolls-Royce continues to
dominate aircraft engines and it's targeting the next
generation narrow-bodied aircraft market.
Since the chief executive took over in 2023, the company's shares have risen tenfold.
(24:20):
Profits are expected to surpass three billion pounds
and debt has fallen, setting the stage for
what he believes could be historic rise to
the top of the UK's market.
Wow, guys.
That Rolls-Royce story, it's definitely an eye-opener.
(24:40):
I definitely have to say that.
And so, Starbucks, Korea bans PCs.
What's this all about?
So, Starbucks Korea is limiting bulky devices in
cafes to maintain shared space and comfort.
This move addresses the rise of, what they
call, customers turning cafe into their own personal
(25:02):
office spaces.
That's just so bizarre.
So, again, Starbucks Korea is cracking down on
customers turning cafes into their full-blown offices,
banning things like desktop computers, full-blown printers,
bulky items.
They take up lots of space.
(25:23):
Laptops, tablets, and phones, well, they're still welcome.
But the move aims to keep seating available
and maintain a pleasant shared environment, as the
trend of people working long hours in cafes
sparks debates online.
So, even if you do have a laptop
and you're the kind of person that has
all these extra accessories, you're kind of the
(25:45):
one that's going to probably get kicked out.
So, if you need more than, you know,
a little space for your laptop, like you
need a whole extra table or something, that
might be a problem.
So, we'll have to see what's going on
there.
And ladies and gentlemen, the Jersey police are now using artificial intelligence technology, helping the
(26:05):
police boost efficiency, transcribe interviews, and tackle rising youth crimes.
By combining tech with human oversight, law enforcement
can now respond faster while maintaining accountability.
Kind of crazy.
(26:26):
So, with the Jersey police turning to AI
to hopefully crack down on these crimes and
using AI to fight them, like I said,
being able to transcribe witness interviews, and keeping
human oversight, this technology is really at the
(26:47):
cutting edge to boost efficiency and decision-making
as the island sees rising missing persons and vehicle incidents, highlighting the need for more resources
and tech-driven solutions.
The concern is, if they do this kind
of stuff, what is going to be their
(27:07):
modus operandi for protection, safety, and security?
I think that's a big concern, guys, for
a lot of people, like, you know, what
do you do for safety?
And I know that's something that a lot
of people are saying to me, John, you
know, I don't know.
But if you put information in somewhere and
you don't have a plan to manage it,
well, that could be a very, very big
(27:28):
problem.
And I don't know if you guys know
this, but I think it's a real interesting
thing.
Waymo gets the first permit to test autonomous
vehicles in New York City.
This just happened a few days ago.
The company received its first permit to begin testing its autonomous vehicles in New York City, with a trained specialist behind the wheel, helping the
(27:49):
company advance its self-driving ambitions.
Wow.
That's a very, very big step.
And I think a lot of people are
concerned about not just AI, but where's all
this information going, right?
I think that could be a big problem.
(28:12):
And now another interesting thing that's hitting the
market, Amazon lobbies the Indian government to exempt exports
from foreign investment rules, the sources are now
saying.
And so everything happening is all about the
money.
It's all about, you know, how much is
it going to impact people?
And I think as it impacts people, then
people are like, oh, we got to do
something, right?
We got to do something.
(28:33):
And the more people see this, they're like,
oh my gosh, what are we going to
do?
And so these are some very tough questions
and we don't have the answers for these
just yet.
Thinking about the stuff that they're working on
and thinking about all these great things, building
(28:55):
different data centers that are larger are great,
right?
But my question is, where is this data
going?
Where is the policy for the security of
this data?
And I don't know if you guys know
this, but the Australian watchdog has ordered the Binance
(29:18):
unit to conduct an audit over some money
laundering concerns.
So just because something gets resolved once doesn't
mean that somebody's in the clear.
So again, I always say to, you know,
do the right thing, right?
Do the right thing all the time, even
when no one is watching.
(29:40):
Why?
Because it's just better.
You're going to have fewer headaches.
Plus you're going to want to sleep well
and you're going to want to be a
person that doesn't have to worry about what
you said last week, right?
I would think everybody here wants to be
somebody that is bound to their own integrity,
to other people's integrity.
(30:00):
But I know that sounds like something that
is like complicated, right?
I know right now the big thing coming
up is TikTok, right?
And supposedly in a week or two, Trump
is going to let us know who he
got to buy TikTok.
I think it's probably going to be like
Meta and a few other companies.
It's called M2 or M3.
So it already has like Meta's name in
(30:23):
it, if I'm guessing, you know what I'm
saying?
Like probably like Meta and maybe two other
companies.
That's my guess.
I don't know yet, but we'll have to
see.
And I think the more that we can
see what's going on, the more it's going
to impact our economy, the more it's going
to affect local businesses.
It's going to affect the culture, the political
(30:44):
culture and the geopolitical culture.
So a lot of this stuff out there
seems to be interesting.
But when people develop different products or different
services, are they doing things for the greater
good of all?
And you've heard me say this before, okay?
(31:06):
You know, I think most people don't understand
that something can't just be profitable; it has
to be for the greater good of all
concerned, right?
And I know that sounds like something, you
know, you probably don't want to hear, but
it's the truth, okay?
(31:26):
It is actually the truth.
And so the more we think about different
technical challenges and technical problems, the more we
ask ourselves questions like now, ChatGPT is version
five, I believe.
And, you know, Microsoft Copilot stuff.
(31:47):
Now they're able to even take your voice
as input.
But again, the challenge still remains that as
they're developing this technology, okay, it's not perfect.
It's far from perfect.
It makes mistakes.
I was using a piece of AI technology
the other day, and it kept telling me
(32:09):
that my browser wasn't supporting something.
When my browser actually was supporting it, it was just that they had a bug.
And here is something very, very interesting.
I think this is really cool.
Did you know that the White House is
now opening their first TikTok account?
(32:32):
That's wacky, right?
I mean, TikTok is supposed to be banned
in the United States, but yet it's funny
how Donald Trump is opening an official TikTok
account.
I mean, he loves TikTok.
He claims that TikTok helped him win the
election or what have you.
So pretty soon we're going to start to
(32:52):
see other departments of, let's say, the government
open these accounts.
It's crazy.
He actually made a very interesting statement.
It says, I am your voice.
Trump relaunches on TikTok with the White House
account.
(33:13):
He threatened to ban TikTok in his first
term, but has embraced it.
Why?
Because that's what got him in office.
So there's going to be a lot of
rivalry back and forth over what is the
government doing?
What's happening with TikTok?
I think in the long run, the United
States is going to do well with it.
(33:34):
But what you don't know about the new
TikTok is that it's probably going to block
us from access to other countries, which isn't
a big deal for me.
But it's also going to give our United
States government access to all the content we
post.
I don't really care so much.
But I think maybe the rules of TikTok
will change.
Maybe they'll be a little fairer, all right?
(33:57):
And I know that it comes down to
the fact that people are about doing the
right thing.
But sometimes, when a push comes to shove
like TikTok, and you have to admit, the
stance that President Trump is taking, it's interesting,
(34:20):
to say the least.
TikTok briefly left app stores in the United States before Trump's second term and went dark for 14 hours.
You all remember that, right?
A pop-up message appeared when the app started working again, reading, as a result
of President Trump's efforts, TikTok is back in
(34:40):
the US.
In fact, TikTok CEO, Shou Zi Chew, was among
many of the tech leaders who Trump invited
to his inauguration.
So again, I'm not really sure what he's
doing with it.
(35:01):
I'm not sure if TikTok is going to
be for the greater good of everyone.
I don't know yet.
I know that reaching them for a simple
problem, they keep changing their mind about rules.
And then when you try to reach them,
I don't know about you, but you have
to go on their app.
And then if you email them, they tell
you, oh, can you please use the app?
So you have to go to a page,
fill something out.
I mean, it's just, it's absolutely, absolutely nuts.
(35:23):
And it looks like tonight, if you guys
have noticed, we have not cut out, which
has been great.
Apparently, Restream has been having issues with my
professional mics, like Focusrite and things like that.
So they're having a conflict.
This just started a couple days ago.
So I've had to do broadcasting off of
(35:44):
my regular mic, which is not my professional
mic, which doesn't have the same high fidelity.
But I think, you know, when a company
offers a product or a service, I don't
care whether it's a light bulb, it's music,
it's a medical service, it's a technical service,
right?
Or an educational service.
I think that company has a responsibility to
(36:08):
make sure the client is satisfied, not just
today and not just tomorrow, but for the
rest of the time to use the product.
There's a company that I work with; I'm not going to mention their name.
And they gave a free trial on something
a while back.
And I needed more time.
So they gave me more of a trial.
Then another time, they wound up giving me
so much of a trial, it was like
(36:29):
over a month.
But then you know what happened?
They took my feedback, they implemented it.
And I wound up becoming a customer.
Well, because a company that we used to work with decided it was okay to take our
price tag, okay.
And let's just say the price tag, I'm
going to say per user was something like
$60 a user.
(36:50):
Let's just say they said, well, we're going
to go ahead now and raise that price.
So now it's how much?
$600 a user, 10 times more.
I mean, that's pathetic.
And the thing is, when people use AI,
(37:12):
I was talking to somebody before about this,
and they use AI and they use it
to exploit us.
See, that's using AI as a tool that's
bad.
I always told you AI and weapons, they're
not good, they're not bad.
They're tools.
It's how we use them that makes them
good or makes them bad.
I know that as a lot of you
hear about AI, you're like, what is AI?
(37:34):
What's AI going to do?
Well, AI is, think of it like this,
an operating system.
Think of it like this.
And it's able to take different inputs.
It's able to learn.
And what it's also able to do is
it's able to do some analysis.
(37:54):
And the analysis is done by what we
call LLMs. And you might be asking,
John, what is an LLM?
So it's a large language model.
It's a type of artificial intelligence that uses
what they call a neural network architecture,
basically one called a transformer.
(38:16):
And it's trained on vast amounts of text
data to understand, generate, and predict human-like
language.
I said to predict human-like; I didn't say
that it was exactly like it.
These models learn patterns.
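As a toy picture of what "learning patterns" means, here's the simplest possible language model, a bigram counter that predicts the most common next word. A real LLM is a transformer neural network, vastly more capable, but the predict-the-next-word idea is the same:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    # Count which word tends to follow which: the simplest
    # possible "language model." Real LLMs learn far richer
    # patterns with transformer neural networks.
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    # Return the word most often observed after this one.
    return model[word.lower()].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat and the cat ran")
print(predict_next(model, "the"))  # cat
```

So "the" was followed by "cat" twice and "mat" once; the model predicts "cat." Scale that idea up by billions of parameters and you get the flavor of what these systems do.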
And I told you I was going to
bring a chat back onto the show.
I think I'm going to do that actually
tonight.
(38:37):
I know a lot of you've been asking
me.
And I'm actually going to ask chat some
things.
Hopefully, chat does not mind.
I'm going to use ChatGPT-5.
And let me go ahead and pop that
on.
For those of you that are watching, you're
probably going to see that on the screen
here in just a second.
But I'm still going to read the responses.
So you'll definitely get that.
(38:57):
So first thing I'm going to ask ChatGPT
right now is I'm going to ask,
what is new in ChatGPT-5?
And it takes a second to analyze.
And it says unified smarter model with built
-in reasoning.
OK.
So ChatGPT-5 Thinking activates for tasks
(39:21):
like coding, science, or financial analysis.
Users can choose a quick answer if they
prefer, or it goes to the deeper analysis.
Enhanced accuracy and safety, that's what they're claiming.
I haven't really seen that to my own
eyes yet.
Next level reasoning, it can perform at a
PhD level in areas such as math, science,
(39:42):
and coding.
I've done some discrete math with it.
And it was pretty clever.
I won't tell you it was perfect every
time.
But it was, on a scale of 1
to 10, it was probably like about an
8.
It can generate fully functional apps, games, and
websites from simple prompts.
A massive context window.
(40:03):
It can handle extraordinarily long inputs, up
to 256,000 tokens.
So it can process entire books, detailed reports,
or lengthy chats without losing context.
But here's the thing.
In the free versions of things like ChatGPT,
you can't just use that.
You just can't.
Voice and writing updates.
So if I asked it something like, why
(40:25):
are you safer?
We're just going to ask that question.
Great question.
When people say GPT-5 and ChatGPT are
safer, they usually mean it's less likely to
generate harmful, false, or inappropriate content.
So safe-completion techniques detect and block harmful
or misleading responses that could be misused
the wrong way.
(40:46):
It avoids generating dangerous content like harmful
instructions or malicious code, if that's what it says.
Massive safety testing.
GPT-5 was tested across 5,000 plus
hours in high-risk areas like biological sciences,
chemistry, cybersecurity, legal, and political advice.
Reduced hallucinations and false information.
(41:08):
So now it's claiming not to give so
much false info.
Stronger internal guardrails.
Recognize attempts at jailbreaking or prompt injection.
And resist them more effectively than it could
before.
Instead of blanket reactions, GPT-5 often gives
better explanations and alternatives.
This is all what they say.
(41:28):
And so smarter, recognizing risks, more cautious and
honest, better tested against harm, and along with
clearer, more helpful content.
And you can also search the web now.
So that's, yes, you can search the web
and it can find things out.
(41:50):
So those are very useful things, guys.
And I think the more that we start
to learn about AI, the more questions we
have about AI, you're going to find
that when AI services charge you, and I
found this out when I was doing some
studying for my discrete math, it's not how
many questions you ask, but it's the number
(42:11):
of tokens, okay?
So I know you're probably asking me this
great question, what is a token with AI?
So a token, it's a single fundamental unit.
So it can be a word, a part
of a word, a character, or a symbol.
AI language models break down text into these
(42:33):
smaller, manageable pieces called tokens, a process called
tokenization, allowing them to understand relationships and generate
human-like responses efficiently.
It says human-like, right?
But it doesn't say that it definitely generates
human language, right?
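To picture what tokenization does, here's a toy word-and-punctuation splitter in Python. This is just a sketch; the real tokenizer behind ChatGPT uses byte-pair encoding and splits long words into sub-word pieces, so its counts come out differently:

```python
import re

def toy_tokenize(text):
    # Split text into words and punctuation symbols.
    # Real LLM tokenizers (byte-pair encoding) also break
    # long words into sub-word pieces, which this toy skips.
    return re.findall(r"\w+|[^\w\s]", text)

tokens = toy_tokenize("Joey went to the store.")
print(tokens)       # ['Joey', 'went', 'to', 'the', 'store', '.']
print(len(tokens))  # 6 tokens, even though it's one sentence
```

And that's the point: you're billed on those pieces, not on how many questions you asked.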
So when I was doing my homework and
just doing some studying for one of my
(42:53):
tests, I was only able to generate about
four or five problems on ChatGPT.
I then went to some other engines like
Microsoft Copilot, which by the way, Microsoft Copilot,
I was very surprised with Microsoft Copilot.
There aren't a lot of restrictions on the
amount of usage you get with it, but
you do have to sign in.
It does have to know who you are,
(43:16):
so you have to have a valid account and
all that.
So that, I don't know what its limits
are because I haven't hit any limits, but
I know ChatGPT, I do hit its limits.
But all of them, long story short, they
want you to pay.
Okay, they do want you to pay.
And I think that's something that a lot
(43:37):
of people don't realize, but it is important
to realize that it goes by the tokens,
right?
The words.
And so when we're thinking about putting together
problems, like when I was doing
a math "choose" problem,
there are a lot of elements
in that problem.
So it was only able to generate
four problems from it.
(43:57):
So you have to realize that if I
give you a scenario like, you know, Joey
went to the store and because he got
there before they closed, he was able to
get the items on sale.
(44:18):
Otherwise, if he missed it and he went
tomorrow, the items would have been regular price.
He saved 20%.
You get my idea? So each one of
those is like a logical reasoning step.
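You can write that little Joey scenario as a conditional, with made-up prices just to show the logical structure:

```python
def price_paid(arrived_before_close, regular_price):
    # If Joey arrives before closing, the sale applies:
    # he saves 20%. Otherwise he pays regular price tomorrow.
    if arrived_before_close:
        return regular_price * 0.80
    return regular_price

print(price_paid(True, 10.00))   # 8.0  -> got the sale
print(price_paid(False, 10.00))  # 10.0 -> regular price
```

One condition, two outcomes, and the discount only applies down one branch. That's the kind of structure the model has to pull out of your plain-English sentence.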
And when we reason in our mind, what
you're going to find is that we don't
get the answer right away.
We first get the input.
(44:39):
And then sometimes we have to walk away
from the input like, oh, I get the
answer.
I'll give you a perfect example.
I was working on my final.
And there was one question that wasn't coming
to my mind.
And I was about ready to hand it in.
And just as I was about ready to
hand it in, I got an idea.
And I said, let me try this.
Let me try this.
And at the last minute before I handed
(45:00):
in, I came up with the closed formula,
which is a formula that allows
you to get to whatever n is without
having to know anything more than which iteration
you want to get to, without knowing the
previous iteration.
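To show the closed-formula idea without giving away my exam question, take a standard example: the sum 1 + 2 + ... + n. The recurrence has to walk through every previous iteration; the closed formula n(n+1)/2 jumps straight to any n:

```python
def sum_iterative(n):
    # Recurrence: each step needs the previous iteration's value.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_closed(n):
    # Closed formula: jumps straight to iteration n,
    # no knowledge of previous iterations required.
    return n * (n + 1) // 2

print(sum_iterative(100))  # 5050
print(sum_closed(100))     # 5050
```

Same answer, but the closed formula gets there in one step whether n is 100 or 100 million.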
So those are pretty cool things, guys.
And so AI is a model.
That's what AI is.
It is a model.
(45:23):
And this model helps us to learn,
let's say, about some type of disease.
Setting that up is extremely
expensive, right, to create that physical environment.
But if we can mimic this in a
virtual environment, and we can easily change things
(45:46):
like genetic factors on the person, all in
these virtual type of models, we can then
make certain deductions based on that with different
formulas and statistics, all by changing one thing.
And so AI in itself is a very
good modeling tool.
(46:08):
But what AI is not is something that
is just going to come up with a
perfect, clear answer.
Because there is no perfect, clear answer.
What I might think is perfect and what
you might think is perfect is something completely
different, right?
What would you use AI for?
If I had AI right now, would you
(46:28):
use it to help you do your homework?
Would you use it to help you learn
a concept?
Maybe you want to learn a language.
And now with chat GPT and other systems
being able to support voice as an input,
I think that's a really, really cool thing.
And now we see the Jersey police using
(46:50):
AI and industries that are using AI.
I mean, I called a library the other
day.
And it said, thank you for calling the
XYZ library.
And at first it's like, you know, if
you know your party's extension, dial it now.
Otherwise, press one and enter your library card
number.
So if you were to go forward and
(47:11):
enter your library card number, it would proceed
to tell you, you know, you currently have
four books out.
One of them is one day late or
is due tomorrow.
Would you like to renew that book?
And you could do all this automatically.
It could be used for reservation systems, but
where the problem comes in is when we're
(47:34):
not able to make those quick changes.
Like, let's say we were at a vending
machine and it says, well, what drink do
you want?
I like the half ounce bottle of Tropicana
orange juice.
And it says, okay, you want the half
ounce bottle of pineapple juice.
No, I said orange juice.
(47:55):
Got it.
I'll give you two pineapple juices.
So that's where there are some issues.
So I think we need to clarify if
it can't analyze our voice for the correct
answers, then maybe we need to have a
prompt like, you know, is this correct?
Press one for yes, press two for no.
We've seen this on IVR, interactive voice response
system.
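That confirm-before-acting pattern can be sketched like this; the function names here are hypothetical, not any real IVR platform's API:

```python
def confirm_order(heard_item, get_keypress):
    # Read back what the system thinks it heard, then require
    # an explicit keypress before acting: 1 = yes, 2 = no.
    print(f"You said: {heard_item}. Press 1 for yes, 2 for no.")
    if get_keypress() == "1":
        return f"Dispensing {heard_item}"
    return "Okay, let's try again."

# Simulate a caller pressing 2 to reject a misheard order.
print(confirm_order("pineapple juice", lambda: "2"))
```

That one extra keypress is what keeps the machine from handing you two pineapple juices when you asked for orange.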
So I think where we're trying to get
(48:17):
with artificial intelligence is a place where we
can learn, where we can use it to
gather more data so we can build more
models.
Because that's one of the things, ladies and
gentlemen, that takes so much time in our
life to build models.
And so, as you know, if you've ever
heard about how they make ships and things
like that, they don't go build a great
(48:38):
Navy ship.
They practice, they build a model or a
prototype that is a fraction of the size.
Everything else is built exactly the same way,
but scaled down to maybe the size
of a toy, you know, battery-controlled
(48:58):
ship.
But everything else is built the same way.
It's built with the same components, but down
on a different scale.
So, you know, we've talked about, you know,
remote control boats and things like that, just
like remote control cars.
But think of it like this.
Imagine that boat being built with industrial grade,
(49:20):
military grade, right, material so that it could
go through the different tests.
They could test to see if it would
go under and they could test things on
the scale.
And so when you test things at a
scale, you know that when you get to
a higher level, if you
keep within certain tolerances, mathematically it's going to
(49:43):
be the same.
And there are times when you have
to test it at a larger scale up to
a certain point, it might be 10%, it
might be 20%.
And then you know that you've got something
that'll be pretty precise to that level.
And that's the thing that is always impressive
about AI is that we can keep using
it to generate data.
Today, I was doing a practice experiment with
(50:04):
AI, and I told it to grab me
all the phone numbers and all the businesses
within a certain range of me.
I will tell you that it did not
get it correct the first time.
It only gave me 10 businesses.
Then I'm like, well, what happened to the
phone number?
And where's the contact?
And where's the type of business?
Whether it's a coffee house, whether it's a
(50:24):
clothing store, whether it's a food store, like
where is all that?
So I had to do it again.
By the time I had done it
a fourth time to finally give it
all that it needed, guess what happened?
It says, sorry, you're out of tokens, and
you need to upgrade.
I need to upgrade.
So that's the frustration that I think a
(50:45):
lot of people don't realize is how that
works.
So people ask you, so how much is
ChatGPT 5?
So it's not going to break the bank.
ChatGPT is free for all users, although
it has usage limits.
Paid users receive access to GPT-5
with higher usage caps.
Advanced features like unlimited GPT-5 Pro and
(51:08):
early access to new tools start with the
Plus plan at $20 a month, and higher
tiers like the Pro and Team plans.
Then you have things like API pricing, so
you can get in and they have things
like priority processing.
So you have batch pricing and things like
that.
So they have GPT-5, the best
(51:29):
model for coding and different types of agentic
tasks across industries.
And that roughly works out to be $1
.25 per 1 million input tokens.
Okay.
Cached input, $0.125, because we know sometimes
(51:49):
we can reuse what's already in the cache, right?
Output, $10 basically gets us 1
million tokens.
So that's a difference, right?
So input being $1.25 per million, cached
input being even cheaper, but then
(52:09):
an output of 1 million tokens, that means
1 million tokens out, $10.
You get how this works?
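To see how the token math adds up, here's a little cost sketch using the per-million-token rates quoted above. Treat the rates as assumptions; published API pricing changes over time:

```python
def api_cost(input_tokens, cached_tokens, output_tokens,
             in_rate=1.25, cached_rate=0.125, out_rate=10.00):
    # Rates are in dollars per 1 million tokens.
    per_million = 1_000_000
    return (input_tokens * in_rate
            + cached_tokens * cached_rate
            + output_tokens * out_rate) / per_million

# 100k fresh input + 50k cached input + 20k output tokens:
print(api_cost(100_000, 50_000, 20_000))  # 0.33125
```

Notice the output tokens dominate the bill even though there are fewer of them. That's why asking for a lot of output burns through your allowance so fast.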
So ChatGPT is meant to simplify things, okay?
Working with text is very easy and you
just got to be clear and specific about
what you want.
That's probably the most important thing.
ChatGPT-5 mini is basically $2.00 for
(52:33):
1 million tokens output, $0.25 for 1
million input tokens, and $0.025 for 1 million
tokens on the cached side.
Faster, and it's cheaper than GPT-5 for
well-defined tasks.
And then we have GPT-5 nano, a cheaper version of
(52:54):
ChatGPT, great for summarization and classification tasks.
And then if you
go to ChatGPT 4, 1 million input tokens
is $3 and 1 million on the output is
$25.
So you see how that kind of changes
(53:15):
around a little bit?
And I think if we start understanding that
ChatGPT charges us by tokens, that's
how we should view it.
So there's another one I've used, it's called
Venice, Venice AI.
And now Venice doesn't give you all of
the information that ChatGPT does.
(53:37):
But Venice prices, Venice AI prices is a
little bit different.
So the pricing for Venice is they have
a $0 option, 25 text prompts per day,
15 image prompts per day.
At the Venice pro level, you get private
text, image and code, you get advanced AI
(53:57):
models, you get character creation, you get free
tier API access, you get unlimited prompts, you
can remove image watermarks, high resolution upscaling, and
you can disable what they call the mature
filter.
But what I've noticed about Venice is that
it doesn't really work well with
(54:20):
your input.
So if let's say, for example, you're giving
it a file.
So ChatGPT is very good when it comes
to working with input because you can give
it a file and now you can tell
it, hey, do this for me based on
this.
And so they don't really get into the
tokens so much, but a Venice token is
(54:41):
$3.58. So it's hard to understand how
they all deal with tokens, because all of
these models, okay, they use tokens in different ways.
So they have what they call private AI.
It's a privacy architecture keeps your AI prompts
(55:01):
100% private and all data stays on
your device and not on the servers.
Then they have uncensored AI.
They offer the most advanced, accurate, uncensored models
for truly unrestricted AI experiences.
But again, I don't want you to think
that you could just do anything you want
with them.
There's got to be some limitations making sure
you don't cause some big, big problems.
So we never know where we're going to
(55:21):
go with things.
I wanted to kind of talk a little
about AI because I know that's really coming
up.
And like I said, tech is at a
turning point right now.
I think it's got a lot more things
that it's got to do before it turns
the next corner, including where we're going with
quantum physics and quantum mechanics.
But I think at the end of the
day, if we all realize that AI is
(55:41):
a tool, and if we make the commitment
to use AI as a tool, that'll be
used for the greater good of all concerned,
guess what?
It's going to be a benefit for everyone.
But of course, people can always use things
and abuse them.
So this is why a lot of the
free AI with tokens has to be restricted
(56:04):
because we don't know what these people are
doing.
Now, what about Microsoft? It's actually called Microsoft Copilot.
Microsoft Copilot's cost is just a little
bit different.
Theirs is $30 per user per month.
(56:28):
And so that doesn't sound too bad.
The others were like $18, $19.
But they don't actually talk about tokens.
So it's interesting because somewhere down the road,
somebody is, you're going to sacrifice something.
Somebody is going to say, look, well, you're
out of your token limit.
(56:48):
I was floored that I only went through
five practice problems.
There were literally five unions and intersections and
some other set questions.
I think I did, maybe I did like
six of them or seven of them.
And it already told me,
(57:09):
I think by the time I got to 10, that I was
out of scope already.
And it wanted me to pay.
So it will tell you, you know, you'll
be free at 7:30 at night, or you'll
be free the next morning.
And most people that are using AI are
okay with that.
And there's other AI systems you can use
like even on DuckDuckGo, but that one's very
limited.
I mean, they cut you out on DuckDuckGo
(57:29):
like really, really quickly.
So chat can do some things, but again,
be mindful of the tasks you give it,
because the amount of output you ask for
is where it's going to be a restriction.
And that's important to understand.
And so the more that we understand AI,
(57:49):
the more that we can use it intelligently,
the more that we can help others use
it intelligently.
And the more that we can, let's say,
see it less as a threat, but more
as something that can aid us in our
lives.
Pretty cool, right?
Ladies and gentlemen, I'm John C.
Morley, serial entrepreneur.
Do check out BelieveMeAchieve.com for more of
(58:09):
my amazing inspiring creations.
I'll catch you real soon.
Have a great night and a great weekend.
We'll see you on next week's show.
Check us out at BelieveMeAchieve.com.