Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:07):
Hi everyone, I'm John C. Morley, the host of
The JMOR Tech Talk Show and Inspirations for
Your Life.
(00:50):
Hey guys, how are you?
It is John Seymour, your serial entrepreneur.
It's great to be with you on The
JMOR Tech Talk Show.
It's Friday, guys, and that's what it means.
It's time for the show.
It is July 11th, 2025.
Welcome.
It is a wonderful Friday, and we've got
a lot to share with you here on
The JMORTech Talk Show.
(01:12):
Hey, if you're thirsty or hungry, feel free
to go get something and make yourself comfortable.
I got my RO water here, so I'm
all set, and hopefully you guys will be
set too.
So definitely check that out.
All right, guys, check out BelieveMeAchieved.com for
more of my, of course, amazing, inspiring creations,
which I know you guys are not going
(01:33):
to want to miss, and you can do
that 24 hours a day, incidentally, after the
show is over, that is.
All right, so our topic for this week,
ladies and gentlemen, is techs wow this week,
yet layoffs leak, and AI takeover, series four,
and show number 28.
So a big welcome to everyone.
If you are here, basically your very first
(01:57):
time, I would definitely like to take this
opportunity to, well, I'd like to welcome you
to the show.
If you're coming back to the show, well,
I'd like to welcome you back.
It's always great to have people that have
been with us before.
So without any further ado, why don't we
kick the show off, all right?
So ladies and gentlemen, I want to welcome
(02:17):
you again, formally, to another info-packed episode
of The JMOR Tech Talk Show with me,
John C.
Morley, serial entrepreneur, video producer, podcast coach, host,
graduate student, and a lot more, engineer.
And it is great to be with you.
And I'm also the guy who makes tech
fun, informative, and actionable.
(02:40):
Whether you're a tech geek, just a curious
mind, or just trying to keep up with,
well, the latest buzz in the digital world,
which we all know that that can be
a little bit hard for some people to
do if you're not in the industry.
This episode is packed with stuff to blow
your mind and spark your great conversations that
you have with people.
(03:01):
So let's get into the very first one,
which is the TikTok deal.
Yes, the TikTok deal talks resume, and Trump
restarts the talks with China.
The approval yet is still uncertain.
The question is, is TikTok launching a new
app for the US?
(03:22):
And the answer to that is yes, they're
preparing to launch a new app for the
US users, and it's expected to operate on
a separate algorithm and a data system from
its global app, laying the groundwork for potential
sale orchestrated by the US president, Donald Trump.
And this is according to many people that
are familiar with this matter.
(03:43):
The question is, what's going to happen?
According to different people, TikTok's parent company, ByteDance,
is reportedly building a new app that could
replace the video app in the United States.
And the information has reported that the negotiations
to sell TikTok are entering the final stretch.
(04:04):
Well, we don't have anything in writing, but
they're saying that this is true.
And they're saying that we're going to have
an app by September sometime.
And if you don't switch over to the
new app, I think it's March of 2026,
well, you're not going to be able to
use the previous app in the United States.
So that's what's going on there.
(04:25):
I know everybody's just been kind of up
wondering, like, what the heck is going on?
What are they doing?
And, you know, like I said, President Trump
has announced that the US is beginning the
talks, and that it's a lot more than
that, and that the deal is pretty much
in place.
But again, they're working on China's approval, and
this is coming after several extended deadlines for
(04:47):
ByteDance to divest TikTok's US assets, and as
earlier, efforts to spin off TikTok's US operations
stalled over tensions with, we know, China.
So we're just going to have to see
what's going on there, and we will keep
you in the loop, all right?
All right.
Louis Vuitton had experienced recently a data leak
(05:08):
in the Korea branch, and it was hit
by a cyber attack, and a lot of
contact was exposed.
So the Louis Vuitton Korea data leak had
a lot of cyber, let's say, data that
was breached.
And Louis Vuitton Korea announced that the system
breach happened in June.
It led to a leak of some customer
(05:28):
data, including contact details.
But they claim not financial information, and the
company discovered the unauthorized access just recently, and
notified authorities, and has taken steps to contain
the breach and strengthen security as other LVMH
brands in South Korea face similar data investigations.
My question is, why did it take so
(05:50):
long, guys, for this to happen?
What was the deal?
You know, I think this is really crazy,
like, you know, what's been happening.
But this is, when I say crazy, this
is absolutely insane, you know, what's been going
on.
So we'll definitely keep you in the loop
with this.
But more companies are going to have problems,
(06:12):
you know, whether it's in the US or
China or around, because a lot of people
are being complacent about their security, and they
think that they're all that.
And I'm always here to tell you that
it's not a question of if you're going
to get attacked, but when, if you're not
properly protected.
So again, these are some very interesting things
(06:35):
that are, you know, going on.
And we're just going to have to see
how it plays out.
A lot of people have these egos, and
they think that, you know, their network is
the best.
And I have to tell you, I think
we have a great network, but I never
will say that we never have to test
or that we don't have to, you know,
do some penetration testing and make sure that,
(06:56):
you know, people can't get in our network.
Because, you know, there's vulnerabilities, right?
This happens every single place.
And so you've got to be aware of
this.
It's the people that don't want to be
aware of it that are going to have
the most problem.
And Microsoft, this is a real ouch here,
cuts 9,000 jobs.
That's layoffs that hit Xbox and the sales.
(07:19):
And they're focusing the shifts to AI and
cloud because of this.
But is that really fair for all the
people?
Well, Microsoft's biggest layoff in years hitting the
Xbox, Microsoft cutting, like I said, 9,000
jobs.
It's largest layoff in over two years, impacting
Xbox sales and other divisions worldwide as part
(07:40):
of the efforts to streamline management and focus
on AI and cloud growth following several rounds
of layoffs earlier this year amid heavy investments
in AI infrastructure.
And I think, you know, they've got to
get a little bit leaner because if they
don't, well, it's going to be a problem.
And, you know, Microsoft's not one of these
companies that say, hey, you know, we're planning
(08:01):
to lay people off.
They just do whatever they have to do,
which I got to tell you, that's not
the greatest thing.
And if you've been wondering what's happened, well,
Skype is defunct.
It is no more.
Skype has shut down.
It's retired as of May 5th, 2025.
We knew it was coming and users are
switching to other platforms, Zoom, Teams and other
(08:24):
things.
But, you know, the reason that they did
this, I mean, I think this is the
big thing, you know, why everybody asks me
this question every day, John, you know, why
this is a big why did Microsoft kill
Skype?
I mean, why?
It wasn't a bad platform.
They retired the consumer version of Skype in
favor of Microsoft Teams to streamline its communication
(08:44):
and really to save costs.
I think that was the big reason.
So Microsoft officially shut down Skype, as I
said, on May 5th, pushing millions of users
to find alternatives like Microsoft Teams, Google Voice,
Viber, Zoom and various other voice over IP
services, while the Skype numbers could be ported
(09:05):
to other providers up until June 31st.
And so this has been a little bit
challenging for some people.
They were relying on U.S. based numbers
and they faced challenges.
But numerous options had existed for call messaging
and still do.
And number parking to ease the transition.
(09:26):
But, you know, they mentioned this was happening
and then, you know, you can't just wait
to the last minute to do something.
Right.
That's what everybody does.
Like, oh, my gosh, like what happened to
Teams?
So what they did is they were warning
you.
And then when it was getting to the
end, they pretty much were telling you how
to click here to go over to Teams.
And if you did it afterwards, well, then
(09:47):
the account would just convert over to Teams.
They still have the data, but, you know,
it just wasn't going to launch Teams anymore.
So, you know, that's an interesting thing of
what happened there.
We're just going to have to see what
happens, you know, but they're putting all their
money into Teams.
And catch this, ladies and gentlemen, I talk
a lot about artificial intelligence, don't I?
(10:10):
But I don't think I've ever talked about
artificial intelligence, recall, being in a sewer.
You're like, what, John?
Yeah.
AI stops sewer floods.
The United Kingdom uses smart sensors to reduce
flooding by up to 40 percent, they claim.
(10:30):
So their whole goal is AI sewers will
stop floods before they happen.
AI powered sensors in southern water sewer systems
have helped prevent flooding in West Sussex by
detecting blockages like Fatberg's early allowing prompt action
before water from the waste spills into homes
(10:53):
and gardens.
And this AI technology monitors sewer flow patterns,
distinguishing normal fluctuations from problems and has reduced
internal flooding by 40 percent and external flooding
by 15 percent, marking a significant advancement in
proactive flood management.
Now, the question everyone's asking is, so how
do the AI, you know, sewer systems work?
(11:19):
So basically, they are trained to detect certain
abnormalities in case pipe features and defects and
flows and stuff like that, and they can
respond.
Again, it's intelligence in sewer systems is not
new.
They started this around 2023, but it wasn't
(11:39):
really catching any traction back then.
And so, you know, they have different sensors,
they can use cameras in the pipes and
they can detect things like flow.
They can detect quality of water.
They can do a lot of different things.
But, you know, the cost of this, this
(11:59):
is the real thing.
You know, does the cost of an AI
sewer system make it worth it?
Well, that's a good question.
The return on investment on an intelligence system,
it's not cheap.
(12:21):
All right.
And so many people are wondering, you know,
because we've had about AI to detect sewer
pipe defects and things like that, but we're
talking about just things like the flow and
how to open certain valves or gates based
on how much is flowing or not flowing.
(12:43):
And if they detect something that is getting
too much, then they can shut things down,
right, so that they don't get flooding.
It works very similar to the way a
dam works, but a lot more integrated and
a lot more intense, because the situations have
to happen very robustly, because if they don't,
(13:05):
it could cause problems and then they could
have even more of a flood damage on
their hands.
So we'll definitely keep in the loop about
what's going on with that.
But AI sewer systems, I guess, AI sewer
flood detection systems, we're going to see a
lot more of this.
(13:25):
They're doing things now like sewer mapping.
They can inspect ten thousand feet of sewer
per hour with industry's different AI solutions now.
And so we're doing it for the testing,
but now we're doing it for things like
flood warning systems and flood redirection systems.
(13:45):
So, again, this is more the flood prevention
system.
And so the fact that this technology and
this technology is learning.
So basically what they're doing is they're using
AI to reduce flooding and also limit river
pollution.
And there have been lots of collaborative projects
to make this thing go where it needs
(14:08):
to go.
But I think a lot of people don't
realize that no system is perfect.
Right.
And so they've got to keep getting enough
data in there.
And that could take a while.
That could take a while.
There could be lots of different combinations to
how the water could be.
And that could really affect things.
(14:30):
And I think that's something that we need
to take into account.
So the tattoo tech debate, BlackDot's ink tech
blends automation, but it isn't fully robotic.
And is it a tattoo machine or isn't
it a tattoo machine?
Because this is a very interesting thing.
They say it's not a tattoo robot.
(14:51):
But wait, what is it?
So it's an automatic tattooing device and it
isn't a fully autonomous robot, but it's a
machine designed to assist with tattooing.
Currently, it's used by some studios like New
York's Bang Bang for Simple Text Tattoos.
While it sparks debate over automation and tattooing,
it reflects growing tech innovations in a booming
(15:12):
tattoo market where nearly a third of the
U.S. adults have tattoos, a blending tradition
with new technology.
So the thing is, you know, whenever you,
let's say, introduce technology, you know, you wonder
how well is it going to be received?
And, you know, I think that's an interesting
(15:35):
concept.
Right.
So I should tell you, the BlackDot tattoo
technology employs a digital marketplace and a specialized
device to analyze skin and create a precise
tattoo that comes with design fees ranging from
$400 to $8,000 and execution fees ranging
from $600 to $1,850.
(15:55):
And these prices can vary depending on the
complexity of the design and the artists involved,
with some designs costing up to $10,000
per design.
So you're probably still wondering, like, you know,
what the heck is this device?
So it's it's they claim that it's not
a tattoo device, but in some regards it
is because it's an Austrian based startup that
(16:20):
it's blending technology with body art to transform
how tattoos are designed and delivered.
It was founded by an entrepreneur, Joel Pennington.
The company had developed the world's first fully
automated tattooing device and online marketplace for tattoo
art.
The innovation dubbed a new way to tattoo.
(16:40):
Joel Pennington came to the tattoo industry with
an unconventional resume.
He had already built a successful tech startup
in telecommunications in 2005, and that was acquired
by Cisco.
And after relocating to Austin post acquisition, he
indulged his passion for craft industries.
So he had something called the Eureka moment.
(17:02):
Pennington's leap from coffee to tattoo began almost
serendipitously in 2017, and he joined a Bay
Area coffee tech startup, Bellwether, that used a
hardware software cloud model to automate coffee roasting,
exposing him to cutting edge ideas and automation.
While mentoring another startup that year, he met
(17:23):
engineer Yen Azdoy, whose expertise was in hyper
elastic surfaces, something big, something transformative, and it's
way beyond coffee.
So it's kind of amazing what they did
and what it does.
And so the system basically reads the skin's
(17:47):
characteristics and auto calibrates for optimal needle pressure
and ink deposition.
The thing that happens next is once it's
calibrated, the robot arm executes the tattoo design
dot by dot.
Each micro puncture is only, they're saying, less
than 250 microns in diameter, enabling extremely high
(18:09):
definition results beyond the capability of a human
hand.
And they're able to tattoo grayscale dots with
surgical precision.
And so this allows BlackDot to execute extremely
detailed designs that cannot be tattooed by hand.
The machine's approach is often likened to a
futuristic stick and poke method, since it places
one dot deliberately at a time, unlike a
(18:31):
traditional tattoo gun that requires constant wiping of
the excess ink.
BlackDot's device applies continuous suction to remove excess
ink on the fly, very similar to the
wipe, think, tattoo, repeat method.
Clients sit in a modern ergonomic chair resembling
a sleek dentist or lounge chair while a
robotic arm moves methodically to imprint the design
(18:54):
throughout the system's software.
It ensures that each dot is placed exactly
per the digital design blueprint.
The high technology mythology results in tattoos that
can capture minute details, fine art, photographs or
intricate geometric patterns with a level of consistency
and clarity that astonishes seasoned tattooists.
(19:14):
In fact, veteran artist Steve Godoy described a
tiny BlackDot crafted Mona Lisa tattoo he saw
as a category all on its own.
It's in its own specific art form.
And so.
Although it's not replacing the tattoo world, it
is trying to, let's say, redefine the tattoo
(19:36):
world again, it's all dots.
So interesting thing that they have come up
with, and, you know, it's it's something you
don't hear about every day.
Right.
I mean, the fact that, you know, it's
reflecting this growing tech in a new kind
of innovation in a booming tattoo market and
blending tradition with new technology.
(20:00):
I think that's that's pretty cool.
And Character.ai now decides to add kids
safety for the new CEO that launches parental
controls and claiming these fun features.
So Character.ai's new CEO, Karandeep Amidad, is
(20:20):
focused on tackling concerns over kids using chatbots
by doubling down on safety measures while pushing
the platform as a space for creative, interactive
entertainment.
They're doing this because they don't want to
get sued.
Drawing on this experience that he had with
Microsoft and Meta, Anon plans to balance trust
(20:42):
and fun by refining the filters to better
understand context like harmless vampire roleplay, et cetera,
without being overbearing, adding parent oversight options and
preventing misuse of new features like AI video
generators.
At the same time, he aims to grow
the creator community and differentiate Character.ai from
(21:05):
rivals by making AI conversations more social and
engaging.
Will he be successful or will this turn
into another disaster security breach two to five
years from now?
We'll have to see.
And ladies and gentlemen, ICE.
I'm sure you guys have heard of ICE
before.
People say, you know, what is ICE in
(21:27):
the U.S.? So ICE stands for Immigration
and Customs, basically, enforcement.
They're called ICE for short.
The United States Immigration and Customs Enforcement is
a federal law enforcement agency under the United
States Department of Homeland Security.
ICE's stated mission is to protect the United
(21:48):
States from transnational crime and illegal immigration that
threaten national security and, of course, public safety.
So what I want to share on this,
I think is going to be pretty interesting.
Well, now that you know a little about
ICE and ICE's job is to basically arrest
(22:08):
people that have come to this country illegally,
illegal immigrants, basically.
And so the problem is, is that ICE
has always been trying to deport people that
are here illegally.
But there are lots of people.
How do they do this?
So now they have done a lot of
different research using AI.
(22:31):
They will monitor, let's say if they're staying
with somebody, they'll monitor their credit cards and
their different usage, even their phones.
And they'll be able to figure out a
pattern, like do they go to a certain
store at a certain time?
In fact, there have been several arrests and
they've all been done by some very strategic
data gathering, something that wasn't available before, all
(22:53):
because of artificial intelligence.
So Iceblock app is something new and alerts
users on the iPhone and it warns of
nearby ICE raids.
They claim it's about privacy first, but I'm
not really sure.
Joshua Aaron, a longtime tech professional, created Iceblock,
(23:16):
an iPhone app that lets users anonymously report
nearby ICE activity to help communicate and avoid
immigration enforcement encounters amid Trump's deportation crackdown.
So the app does require people to enter
that they're there or let's say select that
(23:38):
they're there.
It's probably like a simple button.
It's not like, you know, just pressing a
button, letting them know that they're in this
area and they pretty much they can just
GPS track that and they can alert people
to where it is by just clicking the
button and where you are.
And so it was launched in April of
this year, 2025.
It's called Iceblock.
It already has close to 20,000 users.
(24:00):
Wow.
And it's in places like Los Angeles and
it works as an early warning system, sending
alerts when ICE agents are spotted within five
miles.
The app supposedly emphasizes privacy.
It collects no user data and aims to
inform not to interfere.
(24:21):
While Aaron says the tool offers vital protection,
ICE officials, well, they argue that it endangers
law enforcement by increasing risks to officers.
Just a few days ago, there was a
shooting that happened.
Was it because of the app or was
it just because of some other reason?
But the thing is this, I get why
(24:43):
the app was developed, but, you know, if
you're doing something wrong, you know, you're going
to get caught.
So the fact that somebody is building an
app, I am just surprised that.
Apple would even allow something like this on
their platform, because literally what you're doing is
you're well, you're you're kind of bypassing government.
(25:06):
And that's not sending a very, very good
message.
So a lot of people have asked me,
John, you know how?
And it's a good question.
How does the ICE block app work?
And it's pretty simple.
It functions as a user reporting system.
And basically users can open the app.
(25:26):
They can pinpoint the location of an ICE
sighting on a map and add optional details
like agent descriptions or vehicle information.
Once a report is made, users within a
five mile radius receive a push notification alerting
them to the sighting.
The app is designed to be anonymous with
no personal data collected or stored, making it
impossible to trace reports back to individual users.
(25:48):
Reports are automatically deleted after a set period
of time, which, for example, is like four
hours to limit the potential for misuse and
ensure real time information sharing.
The app has measures in place to prevent
spamming and false reports such as requiring users
to select from a list of valid addresses
or tap on a map to validate the
location, according to some researchers.
(26:10):
So the thing about it is the ICE
app itself, they claim is something that is
basically, you know, it's something that is going
to make the world better.
And, you know, the app came up by
a company called, I think it was called
All Chart You.
(26:31):
So ICEblock is it's an iPhone app.
It's intended to alert users of nearby ICE
sightings, as I was mentioning to you.
And, you know, anybody can download it is
obviously it is a free app.
But are we sending the wrong message that
we're allowing an app like this to even
be out there?
I mean, because what we're saying is it's
OK to get around things.
(26:52):
They claim it's 100 percent anonymous, which, you
know, that's all nice.
They claim that the app doesn't collect any
data about the person or the phone and,
you know, things like that.
Facebook first got into the data sniffing business
when it acquired Anovo for around 120 million
in 2014.
The VPN app helped users track and minimize
(27:14):
their mobile data plan usage.
It also gave Facebook deep analytics about what
other apps they were using.
Internal documents acquired by Charlie Warzel and Ryan
Mack of BuzzFeed News revealed that Facebook was
able to leverage Anovo to learn that WhatsApp
was sending more than twice as many messages
per day as Facebook Messenger.
(27:35):
Anovo allowed Facebook to spot WhatsApp metrics rise
and justify them paying 19 billion to buy
the chart startup in 2014.
WhatsApp has since tripled its user base, demonstrating
the power of Anovo's foresight.
But again, the thing about this is they
meant this app to be very easy to
(27:57):
use, but it is causing problems.
The ice, the ice block, for many people
are saying the ice block is causing, you
know, agents to be harmed.
And I don't think that was the intention
(28:18):
of the app, but it could cause federal
agents to be injured.
They're saying over a 500 percent increase in
ice agent assaults.
And so it's designed to be an early
warning system.
But there are groups on Facebook and lots
of the platforms that are just like, oh,
yeah, you got to do whatever you got
(28:39):
to do.
Right.
But I think this is causing a problem.
And when we think about why it was
set up, why immigrations and custom enforcement was
set up, you know, it was set up
because they want to keep the place safer.
(29:02):
And, you know, immigrations and custom enforcement has
grown so much.
Department of Homeland Security and custom enforcements, they're
carried out through more than 400 federal statutes
and they focus on smart integration enforcement, humane
(29:23):
detention, preventing terrorism and combating illegal movement of
people and goods.
So all the reasons they do that, that's
not up to us.
But I think if an app is out
there and the app is trying to keep
people safe, that's one thing.
But if the app is trying to allow
people to hide so they don't have to
(29:44):
pay space the piper, I think that's a
problem.
Just my opinion.
And ladies and gentlemen, NPR and PBS are,
let's say, up to their neck in some
problems as funding cuts could be eminent as
the House votes now to defund the public
(30:05):
media.
Senate decision is going to have a big,
big impact on this.
The House narrowly passed a bill to eliminate
two years of federal funding for public media,
including NPR, National Public Radio and the Public
Broadcasting Service at President Trump's request, citing claims
(30:26):
of liberal bias, a measure which also slashes
billions in foreign aid past 2014-2012, with
two Republicans flipping their votes.
Critics warn of devastating impacts on rural stations
and emergency services, while supporters argue that taxpayer
dollars shouldn't fund biased media.
(30:48):
The legislation now heads to the Senate, where
its fate remains uncertain as debates continue over
public media's role and funding in a polarized
political climate.
Now, we've heard the story many times before,
you know, that they always want to broadcast
basically what bleeds because that'll lead.
(31:09):
But that's not the right thing.
Being in media for a long time, I
can tell you that it's important that we
put information that is truly going to be
respected, that it's going to be truthful.
I think that's hard for a lot of
people to understand.
And a new innovation, solar cells for space,
(31:29):
a new ultra thin type of solar panel
glass could now power the future.
This is really interesting.
Scientists are developing ultra thin glass cadmium telluride,
CDTE solar cells that could transform space energy
systems by offering lighter, cheaper and radiation resistant
(31:51):
power for satellites and space manufacturing.
It was tested on a CubeSat and the
technology targets 20 percent efficiency in space and
already 23.1 percent on Earth and promises
longer service life and lower costs than traditional
silicon or multi junction solar cells.
(32:12):
That's pretty amazing.
With the global space industry booming and satellite
constellations expanding, this innovation could meet soaring demand
for scalable, efficient solar power.
So I think that's an amazing thing that
they have come up with that whole concept.
And the thing is, by us coming up
with this concept, this could actually not only
(32:35):
lower costs, but it could improve sustainability of
our world.
So I think that's a really important thing
to keep an eye on.
And ladies and gentlemen, this is a very
interesting one.
China's robot school, virtual reality trained bots.
That's right.
They have learned and are continuing to learn
(32:58):
tasks like human things, things that humans will
do all the time.
And I think this is pretty interesting because,
you know, knowing what China is doing and
what they're trying to do, China is the
first public robot school in hi-fi that
is training robots to perform real world tasks
like logistics, warehouse handling, and home assistance by
(33:20):
using virtual reality guided human teachers to teach
fine motor skills.
Robots practice two more action sequences daily, learning
through physical interaction rather than simulation to adapt
to unpredictable environments.
The facility offers shared infrastructure and services, helping
smaller companies access advanced training and aiming to
(33:40):
create more capable, general purpose, autonomous robots for
factories, homes, and retail.
I guess we've got to see what they're
going to do, but I think that's going
to be a very amazingly big, big push.
And how about this?
Let me catch this.
1500 flights were grounded.
(34:01):
That's right.
Airlines explode over a massive French air traffic
controller strike.
A massive strike by French air traffic controllers
led to the cancellation of over 1500 flights
in just two days, disrupting travel plans for
tens of thousands of passengers.
Now the strike was driven by demands for
(34:23):
better working conditions, concerns over understaffing amid rising
air traffic and opposition to new biometric attendance
systems that caused about 40% of flights
in and out of Paris to be canceled.
Airlines, including Ryanair expressed strong frustration over the
frequent strikes, which will also affect over flights
(34:46):
and cost significant disruptions.
French authorities and unions remain at odds with
the strike happening at a peak travel time
as schools closed for summer, highlighting ongoing tensions
in aviation sector, which has been a very,
very big problem.
So I got to tell you about one
of my challenges.
So I was actually leaving last week, Thursday,
(35:07):
to visit my parents in Florida.
And so my plane's supposed to take off.
I think it's like, whatever time, I think
it's supposed to take off at 7 30.
Well, anyway, they changed my gate like at
least three times.
And then just before I'm about to get
on the plane, we knew there was, there
was a delay and I'm watching the gate
and all of a sudden I'm seeing, uh,
(35:29):
I'm hearing that, you know, you've got to
have, you don't have to have your password,
but they will scan.
I'm like, what are they talking about?
Then I look, I'm like, this plane's boarding
to Frankfurt, Germany.
And I'm like, how did that happen?
I was going to Fort Myers.
How did it suddenly change?
And so then I'm like, well, where's my
plane?
And then I see last call on the
phone boarding.
I'm like, where's my plane?
(35:50):
So I'm running down.
I'm like, oh, it's down like four or
five gates down.
I'm running to the plane.
They're like, oh, you have time.
We're not closing the door for five minutes.
And so like, why didn't I get that
notification?
But here's the thing.
When I got on the runway, well, I
get the plane and then we're on the
runway.
They, uh, apparently you think they would think,
right.
They did not have fuel in the plane
(36:11):
for very little.
So they got fuel, but then they only
got enough for half a tank.
The pilot said we probably can get to
Fort Myers with a half a tank, but
he'd prefer having a full tank.
I would too.
So now we have to wait some more
time for this, uh, fuel embark to get
to us, but there's lots of jets in
(36:32):
the way.
So how do we get our fuel when
these jets are in the way?
Right.
So we have to wait.
That was another delay.
We got our fuel.
So now we're number 32 or 34 in
the lineup to depart.
I mean, that was just kind of like,
that was kind of amazing that, that, that
was taking like, you know, so long.
(36:53):
And then the problem that happened was we
had to wait because we're number 32.
Now, the thing I didn't tell you is
that all these other planes, well, they couldn't
take off at least, let's say more than
30 of them couldn't take off because their
flight plan had been canceled, like, that's just
(37:17):
insane.
Their flight plan had been canceled because of
the bad weather.
I mean, that's what it was coming down
to.
Right.
And so now I'm watching a movie on
the airplane, literally the entire movie, an hour
and 45 minutes or something.
And it's funny.
We take off about an hour late.
(37:40):
No, we take off about, no, we take
them about an hour and a half late
and we still, we still get to the
destination about an hour late, so they made
up a little bit of time.
And so I think it was just very
interesting how this worked.
The flight was a pretty good flight, but
I just don't think that the airlines like
have all their ducks together.
(38:00):
I mean, that's just my honest opinion.
And micro lead testing is a pretty big
breakthrough as new probe now test fragile wafers
safely.
So what's all this about?
So a soft probe breakthrough for micro LED
testing.
Um, and so researchers at the tie engine
(38:22):
university have developed the world's first soft, non
-destructive probe for testing micro LED wafers applying
only 0.2 MPa of pressure comparable to
a gentle breath.
This flexible 3d probe array adapts to the
wafers delicate surface, preventing damage and extending probe
(38:43):
lifespan, which addresses major hurdles in scaling up
micro LED production.
The technology enables precise high throughput quality testing
critical for next generation, ultra bright energy efficient
displays used in TVs wearables, and more and
more, it could expand into flex electronics, medical
(39:03):
devices, laptops, et cetera, revolutionizing the industry for
wafer inspection and accelerating commercial micro LED production
in the factory.
That's pretty interesting.
You know, um, so, uh, one thing I
(39:26):
thought that's kind of interesting to share with
you is a little bit how technology works.
Okay.
So, um, we start off with something called
an ingot.
So an ingot, um, silicon is melted and
then, uh, let's say a seed crystal is
slowly pulled out rotating as it's withdrawn to
(39:46):
form a large, um, uh, uh, cylindrical, uh,
mono crystalline ingot.
And so, uh, the ingot is then sliced
into thin wafers using a wire saw or
an other specialized cutting tool, like an inner
diameter saw, uh, they lap and polish and
(40:08):
go through over 30 to 60 different processes
to get the wafer ready before testing.
The slice wafers are, like I said, they're
lapped to remove any damage from the slicing
process and then polish to mirror finish, ensuring
a smooth surface for further processing.
But that's not five or six steps.
I mean, you're talking a lot of steps
that that has to go through.
The wafer is then thoroughly cleaned to remove
(40:29):
any, uh, remaining debris or contaminants.
Depending on the desired application, the wafers may
undergo further processing, uh, such as, um, dopping
to alter their electrical properties.
Um, so they go through a lot.
Um, and so, um, dopping, um, you know,
(40:52):
in electronics is probably a term maybe you
have not heard before, uh, is the process
of intentionally, uh, introducing impurities into a semiconductor
material like silicon to alter its electrical, uh,
conductivity because semiconductor, it's not fully able to
conduct.
So we need to add something to it.
(41:13):
The process is crucial for creating transistors, diodes,
and other essential components by adding specific impurities,
the conductivity level increases of the semiconductor, and
it can be precisely controlled, creating either an
N-type of excess electrons or P-type
for excess holes.
And, um, the thing about this process that
(41:34):
many people don't realize is that yes, they
can get the costs down, but the cost
really comes from the design.
And so the more you run, it doesn't
matter whether you run a small or big
one, the more you run, the more the
cost comes down, right?
So R and D is very expensive.
And I thought you guys would appreciate, appreciate
(41:56):
those.
And people ask me this all the time,
John, you know, what does, uh, an ingot,
um, ingot cost?
So, um, the cost of an ingot varies,
uh, based on the metal purity size, the
current market conditions, iron ingots, for example, can
range from 300 to 500, uh, per ton.
(42:18):
Um, so that's an important thing, but how
much are ingots costing, let's say with, you
know, semiconductors.
So, um, when we talk about ingots for
semiconductors, uh, they range as well, um, from
under $10 to a hundred dollars, while a
12, a six inch, I say, silicon wafer
can cost from under $10 to a hundred
(42:40):
dollars, while a 12 inch wafer can cost
up to $5,000.
So, um, the thing is, you know, when
you think about this, uh, high purity silicon
ingots can range from 1200 per ton, um,
which is not cheap.
Silicon carbide wafers can range from a hundred
to $1,600 per wafer.
(43:03):
And quartz ingots will, they're even, they'll range
from one to $110, depending on the type
of supplier they're coming from.
So, um, ingots are what's needed to produce,
uh, lots of different things.
And they're used a lot in the technology
industry.
They're using the iron industry, but they're like
the initial part of how things get kicked
(43:26):
off, how chips get made.
We start with that ingot.
So it's a really, it's a very, very
interesting process.
And I'm just giving you a real short
overview of it.
And here's where I think you guys are
going to like, uh, well, AI tackles nuclear
waste, uh, new compound removes 90% of
iodine from waste.
(43:46):
Um, I think that's, uh, a pretty interesting,
um, thing.
Uh, and so a little more on that
track is that AI is crushing nuclear waste.
90% of radioactive iodine is now being
eliminated.
That's a, that's a pretty, um, that's a
(44:08):
pretty amazing thing.
Um, researchers at South Korea, Keist have used
AI to discover a new multi-metal compound
capable of removing over 90% of radioactive
iodine.
Um, which is I dash one 29 from
contaminated water, addressing a major challenge in nuclear
waste cleanup by applying machine learning to efficiently
(44:30):
screen layered double hydroxide LDH materials.
The team has identified a copper chromium iron
aluminum compound that outperforms traditional absorbance, significantly
speeding up and reducing the cost of developing
effective decontamination solutions, this breakthrough holds promise for
safer nuclear waste management, and as advancing toward
(44:51):
commercial use with ongoing efforts to improve stability
and create practical filtration systems.
Wow.
So AI is definitely making some progress.
Unfortunately, what we're seeing is a lot of
(45:12):
people that are seeing the progress are not
looking at it from the right perspective, they're
looking at it from the view that they
want to make money, but they're not looking
at it from the point that it should
only be used and create as a tool
for the greater good of all concerned.
(45:33):
A lot of people just don't see that.
Why?
Because they see AI and they see some
way to make money.
And I think that's a big problem for
a lot of people is that AI in
itself, AI is not good or bad.
It is a tool just like I've said
(45:54):
about lots of other things, right?
It's evolving all the time.
It can be used for beneficial purposes.
It can be used for detrimental purposes.
It can cause unintended harm.
It can be used for help.
The impact of AI depends heavily on how
(46:14):
it's developed, implemented, and it's used by individuals
in different organizations, so I think that's why
people today have an obligation and a responsibility
that if you're going to use AI, you
understand the right way to use it, you
don't just go buy a drill and say,
oh, gee, I hope I know how to
use it, right?
You actually make sure that you know how
(46:35):
to use that drill before you start using
it because you could hurt yourself or possibly
hurt someone else, and I think that becomes
a big issue, so viewing AI as a
tool rather than inherently good or bad is
hopefully a good way for you to understand
its potential and the importance of human agency
(46:56):
in shaping its future, every piece of technology
that we have has the potential to be
good or to be bad, and the people
that are trying to get something out so
quickly because they're trying to make a buck
off of it are the ones that are
(47:16):
probably not concerned about whether the technology is
going to help someone or whether it's going
to hurt someone, right, I think that's a
big concern for a lot of people.
That we come in contact with.
They don't realize what something means or what
(47:37):
something doesn't mean, and I think that's something
that a lot of people will wrestle with
for, let's say, a very long time, okay,
it's just kind of the way things are,
and so we know right now that AI
(47:58):
has a lot of potential.
We all know that, and everybody thinks that
if they have AI in their back pocket
that they can just do anything they want,
well, they can do anything they want before
AI, you have to realize AI makes a
lot of mistakes.
(48:18):
It is not perfect.
This is probably the most important thing I
want to share with you today is that
AI makes mistakes, all right, and we don't
often catch these mistakes until, well, they're coming
out of our pocket.
You know, companies are using AI for so
(48:40):
many different things, using it to create art,
they're using it for security, right, they're using
it for analysis of food compounds, of knowing
calorie count, things like that, healthy stuff, right,
they're using it for all those type of
things.
(49:02):
You know, it's something how things like this
are going on, and so what I want
to tell you about AI is that AI
is still a big problem with ADP, yeah,
AI is a big issue with ADP and
(49:23):
what they call it, with Workday, it's a
problem.
ADP's Workday, I believe, I'm not sure how
their case turned out, but they got sued
12 months ago, so what is the response
(49:44):
after ADP got sued for Workday?
I mean, ADP was never more than just
a payroll company, but now they're just trying
to get into everything, why?
Because they see it as a potential, okay,
facing lawsuits alleging it's AI powering tools or
discriminating against job applicants, and I see it
(50:07):
too, I have friends that are trying to
get jobs and they can't get hired, and
the system is discriminating against them.
So, they had one that happened, I think
it was in May, a federal judge just
allowed a jobs applicant's lawsuit against Workday to
move forward as a nationwide class action ruling
(50:28):
that the company's AI-powered hiring tools have
had a discriminatory impact on applicants over age
40.
The May 16 decision is a major development
in the Mobily versus Workday, one of the
country's most closely watched legal challenges in the
use of artificial intelligence in the employment decision
sector.
(50:48):
While this age discrimination case is still in
its early stages, the ruling serves as a
warning to employers and AI vendors alike that
they can be held accountable for algorithmic screening
tools if they disproportionately harm protected groups.
I don't just mean age, it could be
race, religion, sexual orientation, creed, color, it could
(51:11):
be any of those things.
And so, this is interesting.
So, Derek Mobily, who was part of this
class action, filed this claim in the California
federal court, and Workday asked to have the
case dismissed since it wasn't the employer making
the employment decision.
(51:32):
That's a cop-out.
After over a year of procedural wrangling, the
California federal judge gave the green light for
Mobily, the continuous lawsuit that was in July
2024.
In February, Mobily sought permission to expand his
age discrimination claim to a national action so
that millions of other applicants over the age
of 40 could also join him.
(51:54):
But in the decision from Judge Rita Lynn
of the U.S. District Court of Northern
District of California, found the allegations cleared the
legal bar to proceed collectively, and nothing to
the case centers on a common question.
And the whole issue is, is Workday, AI
recommendation system, is it despairing impact on applicants
over 40?
(52:15):
Is that happening or not?
And so, her decision is largely based on
that.
And a crucial distinction is the area of
the law comes under political pressure.
But is it despair impact under impact?
Yes.
And that's part of the reason that this
ruling is so important for employers.
President Trump signed an executive order just recently
(52:36):
directing federal agencies, including the EEOC, to eliminate
enforcement based on disparate impact theory.
That move will almost certainly reduce government led
investigations into algorithmic discrimination for the foreseeable future.
But it doesn't affect private litigation like the
Workday case.
It might even spur state agencies to take
(52:57):
up the charge and seek out more disparate
impact claims on their own.
That means we're likely to see fewer or
no cases filed by the EEOC or the
Department of Justice under this theory.
More state agencies claim private class action lawsuits
and opt in lawsuits or target AI tools
on disparate impact routes.
And they should.
(53:18):
The lawsuit is one of the first major
court challenges to use algorithmic hiring tools under
the federal employment discrimination laws.
It highlights several risks for employers using AI
driven systems and vendor tools may create legal
exposure if they disproportionately reject applicants and protected
(53:38):
cases.
Courts may treat screen systems as unified policy,
even when different employers use the tools differently.
Individualized offenses, qualifications or interview rates, et cetera.
What's next?
Parties must meet and confer and find out
and propose a plan.
We need to audit vendors, retain human oversight,
(53:59):
document criteria, monitor for disparate impact, get your
governance in order, stay tuned into legal shifts,
and most importantly, guys, monitor the developments and
provide up to date information directly to your
inbox so that you can actually know what's
going on with one of them is the
Fisher Phillips insight, right?
(54:20):
There's lots of them out there, but there's
also a lot out there, ladies gentlemen, that
are not really giving you true.
They're trying to sell you something.
I can't tell you how many out there
are not really trying to say the truth.
I'm going to share one with you right
now, so one of the platforms starting with
an L ends with an N owned by
(54:40):
a company that starts with an M that
most people don't know, and they are basically
taking data that was there years ago.
Even though you removed it, it's still coming
up in people's searches.
Why?
Because they haven't properly removed it.
And I think that scraping data from a
(55:02):
website should be illegal.
I mean, like that's wrong, but so many
people do it and there's no fines.
There's no nothing.
I got a problem with that.
When we allow something to keep continuing and
(55:26):
we don't set a precedent, so we don't
punish it, people are going to keep doing
it.
People are going to keep saying, well, gee,
you know, I'm going to keep doing this.
Yeah, but we're saying it's okay, but it
really isn't right.
We're, we're, we're saying that it's okay, but
it really isn't okay.
(55:47):
And so I know that this is important
for you to realize right now that AI
is going to shake up the way the
world is.
Not just today, not just tomorrow, not just
next month, not just this year, but even
next year and into the next century, AI
(56:07):
is going to grab data so fast that
people aren't ready for, right?
What are people's biggest fears about AI?
It's a good one.
(56:27):
Job displacement, number one, bias and discrimination, loss
of control and ethical concerns, misinformation, deep fakes,
privacy violations, loss of the human connection, right?
You're calling places right now, whether it's a
telephone provider, like, oh, I'm going to use
(56:48):
the AI system to help you.
Well, I don't want you to use the
AI system.
I want to go to a live person.
And you have to basically either do nothing
or you get forced to use the system.
Like that just seems unfair to me.
It really does.
We have covered a lot this evening, ladies,
(57:08):
gentlemen, and I hope that you'll have an
understanding of where things are going.
We'll keep following these stories because I know
ladies, gentlemen, you guys want to know what's
going on.
And so do I.
And I want to give you the truth
about what's out there.
Not what you want to hear.
I want to give you what's going on
there.
You know how I've state things for quite
a long time.
(57:29):
I tell you what you need to know.
You might not like what I'm going to
say, but I'm going to tell you exactly
the way I see it.
I'm not going to tell you the way
the news reporters are showing it to you.
I'm going to tell you the way it
is right in business or the way it
is right in our life today.
And too much of our world is biasing
(57:49):
the news.
Being a journalist, videographer for many years, that
hurts me.
I think people need to understand what is
going on and why things are going on
and our responsibility and accountability.
I mean, that's the most important thing, guys.
We got to be accountable.
We got to be responsible and we've got
to develop things for the greater good of
(58:11):
all concerned.
Ladies and gentlemen, I'm John C.
Morley, serial entrepreneur.
Check out BelieveMeAchieve.com.
And I'll catch you real soon.