The boundary between technological innovation and absurdity blurs in this fascinating exploration of AI's growing influence on our lives. We kick off with Google's latest accessibility features for Android and Chrome, where Gemini AI now helps visually impaired users understand images, and expressive captions capture not just what people say but how they say it, recognizing that a simple "no" versus an emphatic "NO!" carries vastly different meanings.

But the conversation takes an unexpected turn when we reveal perhaps the strangest AI story yet: a Greek woman who divorced her husband of 20 years based solely on ChatGPT's interpretation of coffee grounds. Without confronting her spouse, she filed papers after the AI claimed he was having an affair with a younger woman. This bizarre incident perfectly illustrates what our guest expert Phil Hennessey later emphasizes—AI systems can "hallucinate" convincingly false information, making critical human oversight essential.

Cryptocurrency exchange Coinbase demonstrates a refreshingly aggressive approach to cybersecurity after experiencing a major data breach. Rather than paying the hackers' $20 million ransom demand, they've established a bounty of the same amount for information leading to the identification of those responsible. It's an innovative counter-strategy that could change how companies respond to cyber threats.

Phil Hennessey delivers a masterclass in understanding large language models, explaining that these systems don't actually memorize information like humans but build statistical word maps through neural networks. His insights culminate in a powerful warning about over-reliance on AI potentially eroding our critical thinking skills, invoking "Human In The Loop" (HITL) as the necessary safeguard against technological overreach.

From Waymo's ongoing self-driving challenges to the latest iPhone 17 rumors, we round out the show with our signature whiskey tasting of Blanton's Gold Edition bourbon. Join us for this thought-provoking journey through the promise and perils of today's rapidly evolving technological landscape—where sometimes the most important lesson is knowing when not to trust the machines.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:20):
Thank you. Pull up a seat, raise a glass with our hosts as we spend the next hour talking about technology for the common person. Welcome to Tech Time Radio with Nathan Mumm.

Speaker 2 (00:34):
Welcome to Tech Time with Nathan Mumm, the show that makes you go hmm. Technology news of the week, the show for the everyday person talking about technology, broadcasting across the nation with insightful segments on subjects weeks ahead of the mainstream media. We welcome our radio audience of 35 million listeners to an hour of insightful technology news. I'm Nathan Mumm, your host and technologist, with over 30 years

(00:54):
of technology expertise. Our co-host, Mike Reday, is in the studio, of course, and he's the award-winning author and our human behavior expert, and he's going to love today's episode because it's all about AI, so he'll be excited about that, right, Mike?

Speaker 3 (01:07):
Yes, all right, there you go.

Speaker 2 (01:09):
We're live streaming the NRC on four of the most popular platforms, including YouTube, Twitch.tv, Facebook and LinkedIn. We encourage you to visit us online at techtimeradio.com and become a patron supporter at patreon.com forward slash techtimeradio. We are all friends from different backgrounds, but we bring the best technology show possible weekly for our family,

(01:29):
friends and fans to enjoy. We're glad to have producer ODR at the control panel today. Welcome everyone. Let's start today's show.

Speaker 1 (01:40):
Now on today's show.
Now on today's show.

Speaker 2 (01:44):
Today we dive into the mysteries lurking beneath the surface. Somewhere, an AI is whispering a warning that would shatter a marriage. But was it forethought or something far more unsettling? Elsewhere, a breach has left sensitive data exposed in the shadows, raising troubling questions about who might be

(02:09):
watching.
Google quietly rolls out new AI and accessibility features and hints at something transformative. But could this be innovation, or not? Meanwhile, Phil Hennessey steps into the conversation to unveil the hidden workings of large language models, the silent architects of artificial intelligence. Finally, whispers of the next iPhone grow louder.

(02:29):
Yet only one true secret is known, and we'll be talking about those on the Nathan Nugget. Is that why you're whispering?

Speaker 3 (02:37):
That's right. This is a mystery. It was a mystery intro. It wasn't a mystery. It was just you sounding like you had laryngitis.

Speaker 2 (02:45):
Okay.
In addition, we have our standard features, including Mike's Mesmerizing Moment, our Technology Fail of the Week and a possible Nathan negative and, of course, our Pick of the Day whiskey tasting, to see if our selected whiskey pick is zero, one or two thumbs up by the end of the show. But now it's time for the latest headlines in the world of technology.

Speaker 1 (03:04):
Here are our top technology stories of the week.

Speaker 2 (03:07):
All right. Story number one: Google rolls out new AI and accessibility features to Android and Chrome. Let's go to Lisa Walker for more on the story.

Speaker 4 (03:18):
Last week, Google announced new AI and accessibility features for Android and Chrome. With the updated TalkBack, Android's screen reader, users can now inquire about the content of images using Gemini. For instance, if a friend sends you a photo of their new guitar, you can receive a description and ask about its brand and

(03:41):
color.
Additionally, descriptions are now available for your entire phone screen. This means that while shopping in an app, you can ask Gemini about the material of an item you're interested in, or check if there is a discount. Now that's an AI everyone can support. Over to you in the studio.

Speaker 2 (04:01):
Is it? Is it really? Well, besides what Lisa mentioned, Google also announced that it's updating Expressive Captions, that's Android's real-time captions feature, to use this AI to capture what someone's saying and how they say it. So this is, I guess, a very important aspect: I can say "no" or I can say "NO!", and they have two different meanings, right,

(04:23):
so each of those means something quite a bit different. A "no" could just be a simple no, whereas a "NO!" could be very emphasized and said with meaning. And so the AI now is taking this into account, to capture this and be able to translate it back. Now, just think if you were having expressive captions and you're a sports announcer and you say it's an amazing shot, or it's

(04:45):
like a "way to go, hit a home run." The inflection, the whistling, the clearing of the throat. There you go. Clearing the throat, these are all important aspects.

Speaker 3 (04:56):
Where's my caption there?

Speaker 2 (04:58):
That's available now. Google's also making it easier, and this is the big win right here, to access PDFs on Chrome. Because it used to always go on: you open a PDF and it would normally open up in, like, the Edge browser if you're on a Windows PC, or, if it didn't, you would open up in Safari and that would normally have the PDF default. Now Chrome has a PDF default reader and it allows Chrome to

(05:20):
recognize these types of PDFs. It allows you to highlight, copy and search for text like any other page, and use your screen reader to find them. This is thanks to the new introduction of optical character recognition, which is OCR, and Google now has that standard on all their Chrome browsers. So if you're going to open up a PDF moving forward, you're going to now be able to do searches, be able to edit it, all

(05:42):
for free in the Chrome browser.
All right, well, there you go. Google is now trying to have these features be important. They also have page zoom, they have a couple of other things, and they are spending time right now really hitting the accessibility options and growing those out for a smaller demographic, but a very passionate demographic of

(06:04):
individuals that need to use the web, that want to use the web, and have the full features that are available. All right, see, now that's a positive AI thing, right, you know?

Speaker 3 (06:14):
Yeah, that's a positive AI thing. I don't know how positive it is. It has to work. So yeah, I know I got Gemini on my phone. What was Gemini you mentioned?

Speaker 2 (06:25):
Gemini, yeah, Gemini is the Google one, so that Gemini is Google's. Is that okay? That came on my phone. That's probably because it's a Google-based phone. It's an Android phone, so that's what you have now. Okay, all right, well, yes, let's talk about this next story. As if there weren't any

(06:46):
more stupid ways that AI could be used for stupid people.

Speaker 3 (06:50):
This has to be the best. Okay, so explain this. Oh, this? What do we got?

Speaker 2 (06:57):
What do we got going on?

Speaker 3 (06:58):
here.
You shouldn't trust generative AI to come up with the right answers all the time. Okay, even the tools' own disclaimers warn you to check for factual accuracy. They're certainly not renowned for their ability to read tea leaves. However, one woman put so much faith in ChatGPT's divination
(07:21):
skills that she divorced her husband of 20 years after it interpreted the remains of his coffee mug as signs of infidelity.

Speaker 2 (07:30):
Okay, hang on here just a second. Yeah, so reading tea leaves, yeah, yeah, the coffee, so coffee grounds. So I have some coffee grounds every once in a while when you do coffee. Well, clearly you shouldn't allow ChatGPT to use them for divination purposes. Okay, would anybody even do that?

Speaker 3 (07:50):
I wouldn't know that it could be done, but apparently it can. Okay, tell me more. This, this sounds... okay. Yeah, rather than using ChatGPT's image skills to create Studio Ghibli-style pictures, a Greek woman decided to experiment with the trend of AI tasseography, okay, which is a form of

(08:12):
divination that interprets shapes left by tea leaves, coffee grounds or wine sediment after a cup is drunk. Okay. She uploaded a photo of her husband's Greek coffee grounds and asked the AI to interpret them. The chatbot's interpretation was that the husband was fantasizing about having an affair with a younger woman whose name began with E, and that he was destined to begin a

(08:36):
relationship with this person.
Oh boy, I don't know. How the heck did she come up with

Speaker 2 (08:40):
this.
I don't know.
I don't know why.

Speaker 3 (08:42):
I don't know why.
This is why I hate it.

Speaker 2 (08:46):
Okay, all right. There's got to be, there's got to be more to this story, right?

Speaker 3 (08:49):
Yes, okay, more to it. Oh man. The wife also uploaded a photo of the remains of her own coffee grounds, and ChatGPT's interpretation was even more damning for the husband. It claimed he was already having an affair with this other woman and she was trying to destroy the wife's family. Okay, yeah, okay. So this enterprising young woman, or

(09:12):
older, I don't know how old she was. I don't even know if this is true or not.

Speaker 5 (09:16):
It is true, though.

Speaker 2 (09:17):
No, it is true.

Speaker 3 (09:17):
The woman then did what any other normal person would do: she went and filed for divorce without telling her husband.

Speaker 2 (09:23):
All right.

Speaker 3 (09:26):
All right, okay, all right. So I guess she appeared on a Greek morning show. The husband said that the wife was often into trendy things and believed getting ChatGPT to read the coffee grounds would be fun. He said he laughed it off as nonsense, but she didn't. "He told me to leave, informed our kids about the divorce, and the next thing I knew I was getting a call from her lawyer."

(09:47):
Oh man. The man refused to agree to a mutual separation, naturally, so he was served with divorce papers three days later. His lawyer is pushing the seemingly obvious argument that claims made by an AI have no legal standing, especially when it comes to reading tea leaves.

Speaker 2 (10:06):
Is the AI better than the gypsy lady down the street who reads the tea leaves? Or, well, I don't know.

Speaker 3 (10:11):
Clearly this woman thought so. Okay, all right. It seems that the woman has a penchant for believing in mystical guidance. A few years ago she visited an astrologer, and it took a whole year for her to accept that none of it was real.
What?
Not only is the whole situation a sad indictment of how much faith some people put in AI and tasseography, but it's also been pointed out that the readings traditionally look at the foam

(10:34):
patterns, the swirl and the saucer, not just what's left in the cup.

Speaker 2 (10:38):
So, so the way to even do this was the wrong way. This is, this is...

Speaker 3 (10:41):
This is three things that I just... I fundamentally don't have a problem with the technology.

Speaker 7 (10:50):
Okay, that makes sense.
We've established that.

Speaker 3 (10:53):
Fundamentally, the technology is usually not the problem. It's the people using it, that's the problem.

Speaker 2 (11:01):
This lady seems to have more problems than technology. Yeah, I'm trying to be very nice. I'm sure she's a nice lady.

Speaker 3 (11:10):
I don't know.
This is stupid.

Speaker 6 (11:13):
This is like stupid criminals, you know how they get caught.

Speaker 3 (11:16):
Yeah, this is just one of the most moronic things I've ever heard.

Speaker 2 (11:19):
It's funny. They would think this would be like in Alabama or someplace like, you know, but no.

Speaker 3 (11:25):
What are you trying to say? What are you trying to say about Southern people?

Speaker 2 (11:28):
You were just talking... No, I'm not saying... we have great, we love our broadcast in the Southern nations. Please, that's not the case. What I'm saying is...

Speaker 3 (11:35):
I'm from the South.

Speaker 2 (11:36):
Okay, I'm just saying that normally, when you see things that go on in these cop movies, like you were talking about, or shows.

Speaker 3 (11:43):
I don't understand why anybody would use that.

Speaker 2 (11:47):
How the heck did the AI know anything about a TV?

Speaker 3 (11:49):
Well, clearly there's some sort of tasseography setting.

Speaker 2 (11:53):
No, there's not.
No, there's not.
I'm an advanced user.

Speaker 3 (11:56):
Are you?

Speaker 2 (11:56):
sure.

Speaker 6 (11:57):
Yes, I'm an OpenAI advanced user.

Speaker 2 (11:59):
No, no, no, there's no tools in OpenAI.

Speaker 3 (12:03):
Well, clearly this is a trend. She said it was a trend. AI tasseography.

Speaker 2 (12:11):
So now, all of a sudden, everybody's going to start getting divorced because they're going to put stuff into ChatGPT? I don't know.

Speaker 3 (12:15):
I have no idea. I am not the right person to talk about marriage and all of its things, but I think, if this happened, this guy kind of dodged a bullet. Actually, he may want to get out. Is that

Speaker 2 (12:32):
what you're saying?
Yeah, I don't.

Speaker 3 (12:33):
I think he should go ahead and grant her the divorce and go on with his happy life, because this is...

Speaker 2 (12:39):
I just can't stop thinking of last year, when he had to tell the lady that whatever she got from the other person was incorrect.

Speaker 3 (12:45):
Or the astrology.
Oh boy, okay, all right.
The moon is in my hammies.

Speaker 2 (12:49):
Okay, story number three: cryptocurrency exchange Coinbase has a significant security breach, and it's fighting back. So let's talk about it. This is kind of cool. This is a highlight story for me. Not that Coinbase got breached, but Coinbase, a cryptocurrency exchange, this is the one on Tech Time Radio that we recommend people use. It's yours, right? Yep, that's what I use. A lot of people use that. It has over 100 million customers.

(13:10):
It's disclosed that cyber criminals, working with a rogue support agent, stole customer data and demanded a $20 million ransom not to publish the stolen information. So the disclosed cyber criminals worked with people within the company, or support agents within the company, to steal the data. So it was an inside job, to say the least.

Speaker 3 (13:31):
Well, isn't it always?

Speaker 2 (13:33):
99% of the time it is. It's very rare that you actually have somebody going on and pilfering into the company from outside in a

Speaker 3 (13:41):
CIA van.
It's not the technology, it's us.

Speaker 2 (13:44):
Well, listen to this. The company said it will not pay the ransom, but instead established a $20 million reward fund for information that leads to finding the hackers who coordinated this attack.

Speaker 3 (13:54):
Now, this is the way you do it. Wanted, dead or alive: $20 million.

Speaker 2 (13:57):
That's right. You know what? If you can find out who this person is, if you can put some eye drops into their coffee and they don't read the tea leaves, then they won't be in existence anymore. You've got yourself maybe $20 million from Coinbase. Now, the disclosure comes after the criminals behind the breach emailed Coinbase on May 11th demanding the $20 million ransom

(14:18):
to prevent public disclosure of stolen information. Now, the threat actors managed to steal a combination of personal identity information from up to 1% of the Coinbase customer base, around 1 million users. They couldn't steal the customers' private keys or passwords, and couldn't access Coinbase Prime accounts or hot and cold wallets belonging to the affected customers of the

(14:38):
crypto exchange.
In a filing with the U.S. Securities and Exchange Commission, the SEC, the company says the data stolen in this incident includes name, address, phone and email, which you can get from any private medical breach that happens to you, since it happens all the time to these private medical companies. Masked Social Security numbers, which is the last four digits

(15:01):
only. Not very helpful, but okay. Masked bank account numbers and some bank account identifiers. Government IDs and images, including driver's license and passport. Account data, balance snapshots and transaction history. And limited corporate data, including documents, training materials and communications available to support agents. So, like the help desk support agent information. Cyber

(15:24):
criminals bribed and recruited a group of rogue overseas support agents to steal the Coinbase customer data to facilitate social engineering attacks. There you go, the social engineering attacks. It's the humans that got attacked. The insiders abused their access to customer support systems to steal the account data for a small subset of customers, Coinbase said in a Thursday blog post. So let's talk about

(15:46):
that.

Speaker 3 (15:47):
I think everybody should go Old West and do this.

Speaker 2 (15:49):
Let me just tell you, on the dark web right now there are people that are actually working to find out who the people were, who have the actual tracking of information. If you want to stop cybersecurity attacks, this is the way to do it, because now you have the hackers on the dark web that are trying to figure out who did it, verify that with IP addresses, and they're going to do a way better job of tracking

(16:11):
down who did this, for $20 million, because it's easy money. All I've got to do is start going back and tracking the IP, who did this, who posted something about it. So all you've got to do is be a snitch for $20 million, versus all the technology that you need for the rest. This Coinbase method is the method that I think may help stop some cyber criminal attacks.

Speaker 3 (16:34):
All right, let's hope.

Speaker 2 (16:35):
I hope so.

Speaker 3 (16:36):
I think that's an interesting response.

Speaker 2 (16:39):
I liked it, all right. Well, that ends our... so, you know what? They must have an actual IT person there that knows what they're doing. All these companies, when they get breached, that PR comes on out, and you just know that FireEye and these horrible companies out there go on in to give them IT security. They don't know what they're doing. This, this, this, this. There's your plug, there you go. This, at least, is a company that has somebody in it that

(17:09):
says you know what? Screw it, we're not going to pay the money, we're going to go after them. All right, that ends our technology stories of the week. Moving on, Phil Hennessey will be up next, breaking down AI and what an LLM, a large language model, is, as we talk about it on the show all the time. Now we get the debrief of what it is. That's what Phil does best. He breaks things down. He tells you never to get in an autonomous vehicle in the snow, and many other different things. Now buckle up as we drive 88 miles per hour into our next segment. See you after this commercial break.

Speaker 6 (17:30):
Looking for custom glass solutions for your next commercial project? Hartung Glass Industries is your trusted partner in custom glass fabrication. For over 100 years, Hartung has delivered proven manufacturing expertise, comprehensive product offerings and dependable service and quality. From energy-efficient facades to custom shower doors, we

(17:53):
create glass solutions tailored to your project needs. With eight facilities across the US and Canada, we combine national expertise with a local touch, ensuring faster service and unparalleled customer care. Hartung Glass Industries, where quality meets innovation. Visit hartung-glass.com to learn more.

Speaker 2 (18:19):
Welcome back to Tech Time with Nathan Mumm. Our weekly show covers the top technology subjects without any political agenda. We verify the facts, and we do it with a sense of humor, in less than 60 minutes and, of course, with a little whiskey on the side. Today, Mark Gregoire, our whiskey connoisseur, is in studio. Mark, what have you

Speaker 5 (18:35):
chosen for us today? Today we are drinking Blanton's Gold Edition. Oh. So, from Buffalo Trace's website: the world's first single barrel bourbon was created in 1984 by Elmer T. Lee and named after former distillery president Colonel Albert B. Blanton. The Gold Edition was the second version created after the

(18:57):
original 93 proof; this is 103 proof. Blanton's Gold is very limited, but a favorite among discerning bourbon aficionados.

Speaker 3 (19:06):
Discerning, discerning, thank you.
What just happened?

Speaker 2 (19:10):
What just happened? I'm here to always correct grammar.

Speaker 3 (19:13):
Did Nathan correct the grammar of Mark?
I'm a theodorus.

Speaker 2 (19:17):
What the?

Speaker 3 (19:18):
Did you have ChatGPT read this?

Speaker 5 (19:20):
No, I did not have that. Okay, keep on going. All right. Hints of spicy rye and tobacco on the nose, followed by caramel and honey, dark fruit and citrus notes. The palate has the same complex aroma, with rye, tobacco and honey. Oak and vanilla contribute to an extremely long and harmonious finish.

Speaker 2 (19:38):
Now this is... this is like a $300 bottle, isn't it?

Speaker 5 (19:42):
Well, let's talk about it. It's from the Sazerac Company, which is the Buffalo Trace Distillery in Frankfort, Kentucky. It's straight bourbon, it's non-age-stated, it is 103 proof. The mash bill is undisclosed. MSRP is $105. Oh, it's $105. Now it goes on the secondary market for $210. Okay, and if you find it at liquor stores that have marked it up, it could be upwards of, like, up to $300.

(20:04):
This is good stuff.

Speaker 2 (20:07):
I like this already.

Speaker 3 (20:08):
Yeah, and it helps his grammar improve.

Speaker 5 (20:10):
It did. I appreciate the correction. I was struggling there. Were you?

Speaker 3 (20:16):
I'm always here for you. You haven't had enough liquor.

Speaker 5 (20:20):
I know, I haven't had my first sip. I was going to drink a little before I started, but I was like, no, I want to make sure to read these big words right, and I'll start drinking now. Okay, now, if you do start drinking out there, drink responsibly. Heaven can wait. That's right. Thank you so much, Mark.

Speaker 2 (20:38):
With our whiskey tasting completed, let's move on to our feature segment today. Today, we're diving into an exciting technology called large language models, or LLMs. They're quietly powering your smartphones, computers and smart speakers. But what exactly are they, and how do they affect our lives?

Speaker 4 (20:59):
Let's start our next segment. Welcome to the AI segment, Cat About Cat, with our Tech Time guest, my favorite humanoid, Mr. Phil Hennessey.

Speaker 2 (21:08):
All right, Phil, welcome to the show. Tell everybody a little bit about yourself, because you've been off for a little bit, a little sabbatical. We're glad to have you back on the show. Tell everybody that may be listening that hasn't heard any of your episodes a little bit about yourself.

Speaker 8 (21:21):
Sure. I'm an engineer and, right now, senior vice president for an insurtech company where we use AI to detect risk for factories and warehouses and help them get better and prevent injuries. So I'm familiar with AI, and I've done robotics in the past

(21:41):
and deployed large safe-city projects, $1 billion projects. So a lot of change management, a lot of technology understanding and systems integration work I've done. Well, thank you. All right, now.

Speaker 2 (21:57):
We love having you on our show, right? Because with you on our show, we always get to talk, and you kind of break down the stuff for the everyday person, type of deal. So, Phil, let's start very simple, all right? What exactly is a large language model? And kind of explain this for everybody that would be listening.

Speaker 8 (22:16):
Sure.
So if people are familiar with ChatGPT, ChatGPT is a large language model, and basically it's a very powerful AI program that is used to understand, generate and respond using human language. And actually it's also ingesting individuals' data; that's how it did this tea-reading thing, and I want to talk about that a little bit. But you know, these models read enormous amounts of text

(22:42):
from basically scraping the entire web: any book that's been digitized, blogs, website articles, you know, Facebook, whatever, right? They've ingested all this information, and then they're able to use that information in different ways, and one of the ways is that they can then take a shot at coffee grounds

(23:06):
and make a determination on that.

Speaker 3 (23:07):
The term is tasseography.

Speaker 8 (23:10):
Yeah, you could say that word. I haven't drank yet, but that's all you. Tasseography, all right.

Speaker 2 (23:16):
All right. So how do large language models actually work? How do the models understand and produce language? Because they have it not just in English, of course; they have it in French and German, all these different areas. How do we go about actually having these things work?

Speaker 8 (23:30):
All right. So we use neural networks; that is the basis for the computation behind it, which is trying to mimic the human brain as much as possible. And then, in the programs themselves, they have these types of components called transformers, or self-attention, allowing the models to then understand each word. So it understands each

(23:55):
word, and each self-attention or attention block understands one word, and they link and work together to build out that sentence, and then that paragraph, that essay or that resume. So each one understands how one word or two words work together. And that's why there's billions of these things happening. That's why you need these huge, huge data centers now, and

(24:18):
you're getting, you know, Microsoft trying to turn on a nuclear power plant just to power an AI data center right now, because of all those billions of parameters that they're doing. So imagine, you know, these huge data centers, and they're basically just producing all this mathematical computation,

(24:39):
billions and billions of computations.
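Phil's picture of attention blocks, where each word scores its relationship to every other word and the results are linked together, can be sketched in a few lines of plain Python. This is only a hedged illustration, not how any production model is implemented: real transformers use learned query, key and value projections running on accelerators, and the tiny two-dimensional "embeddings" below are invented for the example.

```python
import math

def softmax(xs):
    # Turn raw scores into positive weights that sum to 1.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def self_attention(vectors):
    # For each word vector, score it against every word (including
    # itself), convert the scores to weights, and return a weighted
    # blend of all the vectors -- the "every word looks at every
    # other word" step described above.
    d = len(vectors[0])
    out = []
    for q in vectors:
        scores = [dot(q, k) / math.sqrt(d) for k in vectors]
        weights = softmax(scores)
        blended = [sum(w * v[i] for w, v in zip(weights, vectors))
                   for i in range(d)]
        out.append(blended)
    return out

# Made-up 2-D "embeddings" for a three-word sentence.
words = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
mixed = self_attention(words)
```

Running billions of these score-and-blend steps, layer after layer, is the kind of mathematical computation that fills the data centers Phil mentions.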

Speaker 2 (24:42):
So it's kind of... have you ever seen, Mike, have you seen that movie Eagle Eye? Yeah, okay. You see all those big server databases. So I guess, Phil, the nicest way to say it is that, like some of the movies illustrate, there are these huge data centers, tons of information that they're storing.
Speaker 3 (25:00):
So what we have is acres and acres and acres of all
these data producing gizmos,right yeah, that are trying to
mimic the brain in a singleperson walking around, correct?

Speaker 8 (25:13):
Yeah, and think about that, Mike.
Think about that what we havein our head.
We need acres and acres ofcomputers to not even get to
where we are.
Yet Right.

Speaker 2 (25:24):
Even the lady that decided to divorce her husband probably has more... yeah, she has.

Speaker 3 (25:28):
She has more brain computational power than the... than the chat. She doesn't use it, right.

Speaker 8 (25:35):
But hey, you know, there was probably a blog on divination somewhere out there, and ChatGPT said okay, here's my examples of coffee grounds,

Speaker 3 (25:42):
and this is what it says. Yeah, that's right, you know. If I were looking at this more as a professional, like I should, there's probably other stuff going on in that marriage that may warrant that. I mean, just that idea is just... all right, shoot me now.

Speaker 2 (25:58):
So can you give us a simple, everyday example of what a large language model is? So if I'm driving around listening to the show, I've got the idea that there are huge data centers and all this stuff is being stored there. What would be an example that people would kind of resonate well with?

Speaker 8 (26:13):
Well, so if you were reading... so, Amazon: they have all the different reviews for a product, and now what they're doing is giving a summary of all those reviews up top in the review section. That's a large language model doing that summary, so you don't have to read through and scroll through all the reviews anymore. It's all there for you.

(26:34):
Or, as we've talked about before, I want to go ahead and customize my resume to one specific job posting. I can get my resume, I can get the job posting, I can upload them together, give it a prompt and say, hey, I want to go ahead and make my resume more like this job description so I

(26:54):
So just simple things like that that we can use. And, you know, Apple's including it now for journaling and everything else. You create your journal; it looks at everything you've done during the day, and it'll start helping you with prompts for journal creation, things like that. So it's being intrusive, uh, all over the place now, I mean in ways that you wouldn't even think of.

(27:14):
You're starting to see it, and so everybody is probably seeing an LLM during the day if they're online, and they don't even know it right now.

Speaker 3 (27:24):
I like the fact that you said intrusive.

Speaker 8 (27:28):
It's intruding.

Speaker 3 (27:29):
It's intruding on everybody. It's intruding on my life in ways I don't necessarily want it to. Okay, that's, you know... it's intruding on that poor guy's life.

Speaker 2 (27:38):
Yeah, just think. What's interesting now is you can't really opt out of a lot of these user agreements, right? I buy myself a Siri device, I buy myself an Alexa device, and guess what? All of a sudden, I need to make sure that I allow the terms of agreement so I can use that item I've bought, and every single

(27:59):
time they decide to write something new in there, guess what? I have to accept it, and they get more and more access to more and more of my data. Yeah, all your data.

Speaker 8 (28:07):
And that's where the training data comes in. They're using this data, all this information that we generate. They're allowed to use our data, anonymized, to help retrain and modify the models to make them more human-like.

(28:30):
So that's what's happening.

Speaker 2 (28:32):
Let's talk about the models. How do they remember all of this information? They don't.

Speaker 3 (28:38):
Well, they have to.
They don't have a memory.

Speaker 2 (28:39):
Well, how do they store this information? I guess that's my next question, if they have these models that are out there.

Speaker 8 (28:45):
Instead of memorizing like a human does, it creates embeddings, mathematical representations, through statistical analysis. So remember I talked about these transformers? They associate different words together, so it's basically a big data map, I'll call it for lack of a better word, or a big word map.

(29:05):
It's linking everything together, and then it says, OK, this word should go with this word in this context. So I have a certain prompt, and it says, OK, for this type of prompt, say it's a medical question, I know these words kind of work together. So the simplification of it is a statistical

(29:28):
analysis of words to see which word best goes with which word.
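The "big word map" idea described here, which word statistically tends to follow which, can be sketched with a toy next-word counter. This is only a sketch for intuition, with a tiny invented corpus; real LLMs learn embeddings and transformer attention over billions of examples rather than raw counts:

```python
from collections import Counter, defaultdict

# Tiny invented corpus, just for illustration.
corpus = (
    "the doctor reviews the chart "
    "the doctor writes the note "
    "the doctor reads the note"
).split()

# Map each word to a count of every word observed right after it:
# a miniature "word map" linking words together.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the statistically most common next word, or None if unseen."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))    # "doctor" follows "the" most often here
print(predict_next("zebra"))  # None: the word was never seen
```

In this toy "medical" corpus the model has no understanding of doctors or charts; it only knows which words co-occur, which is the statistical intuition being described.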

Speaker 2 (29:30):
So it doesn't have free thought, right? I mean, that's the one thing that everybody thinks about: all these AIs are going to take over the world like Terminator 2.

Speaker 3 (29:38):
Well, that was like that Google engineer who thought
it was.

Speaker 2 (29:43):
More like WarGames, you know, where the computer just decides to attack because it's scared. It may take X equals Y, and then all of a sudden Y is bad, and so you need to get rid of that. But it is not using logical thinking processes or connected synapses to create predictive information the way your brain does, right?

Speaker 8 (30:05):
Well, I would say it's doing a lot of that, but it's not free thinking, it's not conscious. The way I would rather explain it is that it is predicting: hey, this word should go with this word, which should go with this word, to go ahead and make your essay. And they're doing the same thing now with visual as well,

(30:25):
as they're taking billions of images. Like, Google has YouTube, and they're ingesting all of YouTube's videos into their models and seeing how they can use that for visual interpretation. So these large language models are actually becoming what we call multimodal systems, where they can do visual

(30:47):
interpretation as well.

Speaker 2 (30:51):
Alright. So we're going to have to break this into two parts, because I want to do a whole thing just on businesses. So we'll put a little Cliff's Note or a little paper clip there, not Clippy though, because it's no longer available. A little, put a pin in it.

Speaker 3 (31:05):
Yeah, put a pin in it there we go.

Speaker 2 (31:07):
So tell me, though, where do you think large language model technology is heading?

Speaker 3 (31:13):
As we kind of wrap up, the consumer version of this clearly has an exciting frontier in the new-age market.

Speaker 2 (31:24):
Mike loves this story.

Speaker 6 (31:25):
He's still on his story.

Speaker 2 (31:26):
He's still like, what's going on with the coffee grounds? It's not gonna work.

Speaker 8 (31:30):
So I think you're going to see smaller, what I call agents, helping you with emails, or setting your reservation for dinner, or making your doctor's appointment. I think you're going to see that in the next couple of years

(31:50):
, where you have personal agents helping you with different things, different types of support that you can use. It would basically be an executive assistant, a personal assistant, helping you with those types of things. I think that's where it's going to go. There you go.

Speaker 2 (32:07):
It could be your executive WALL-E. Well, there you go. You know, we talked about heading towards WALL-E, man. We talked on our show about the Visa AI agents, right? I mean, it's kind of the same thing. So should we be scared? I mean, last question before we break: should we be scared of AI? Because I go to some events here as a technologist, and I see some people that are scared of this stuff, and then other people that

(32:27):
are not scared of this. What do you think? Should we be scared, not scared? Where are we at?

Speaker 8 (32:34):
Well, I'm going to answer it a little bit of a different way. I don't think you can trust everything it says right now. It hallucinates. So, back to Mike's story: if you're going to believe everything it says, you're going to take actions that may not be correct for your life, right? So you need to have common sense in reading it, and it's so good at stating a fact that is sometimes partially true

(32:58):
and false. You really have to be an expert in that field sometimes to understand it.
So there's still going to be quality control, and what I would call human in the loop. You still need to keep your reasoning, your thinking, and your critical eye in using this information, because it can hallucinate, it will lie

(33:19):
to you. If you don't write the right prompt, it will give you what it thinks you want sometimes.
So that's what I'm scared of. It's like, are we going to get the right information? Like at my work, everybody uses it. It's great. They're always asking it questions for different things. But I'm like, are we getting the right answers from that? Are they real? You know?

(33:39):
Are we checking that answer? That's what I'm more worried about: are we relying on it too much? And I saw an article the other day that people that use these models, like ChatGPT, all the time are losing their critical thinking skills.

Speaker 2 (33:54):
Yeah, that makes sense.

Speaker 3 (33:56):
We've talked about that.

Speaker 2 (33:57):
All right, Phil, you know what? We are going to need to create a tagline here: H-I-T-L. We need to market this. We need to trademark this: human in the loop. You said that right here. I think that is going to be the new definitive terminology that we have to have in AI.

(34:18):
It'll be called H-I-T-L, human in the loop. I think you started it right here, so you can take credit as the father of H-I-T-L, making sure humans are in the loop.

Speaker 8 (34:29):
I like it. I don't think I created it, though, I probably stole it from somebody else. But you know, I like it.

Speaker 2 (34:36):
We're gonna just claim it right here. I'm just gonna run around saying "hittle" all the time, and you know, if we say it enough times and we get published on YouTube as much as our show is, then maybe we can be part of the algorithm as the creator. There we go.

Speaker 3 (34:50):
I love it. Yeah, then the AI is going to predict the future from your coffee.

Speaker 2 (34:54):
All right, Phil, so we want to thank you for being a part of the show. Next time on the show we're going to take a look at how businesses use LLMs, large language models, and some of the challenges. Thank you so much for being a part of our show. We could not be happier to have you back breaking down terminology and technology for us.

Speaker 8 (35:13):
All right, thanks a lot.
Phil, take care.

Speaker 2 (35:16):
All right, Phil does a great job breaking down technology. And with that, let's move on to Mike's Mesmerizing Moment. Welcome to Mike's Mesmerizing Moment. What does Mike have to say today? All right, Mike, here's my question. We're going back to kind of the first story.

Speaker 3 (35:35):
AI is being.
What was the first story?

Speaker 2 (35:37):
The Google with all the new.
Oh, it's good stuff that wehave All the new, yay, yeah,
okay.
Well, ai is.
I know you're still liking yourtea leaves and coffee ground
stuff, but AI is being used tohelp people.
What areas of technology can AIhelp next with, in your view?

Speaker 3 (35:56):
I don't know why you ask questions like that. Why do you ask questions like that? I like it when we have technology that can help in the medical field, in the field of astronomy, you know, where it can help predict things like where a black hole

(36:18):
may be. I don't really like it when AI comes in and starts doing certain things where, like Phil said, the more we use this stuff, the less intelligent we get.
Yeah, and that's the problem. That's one of the hugest problems that I have with technology, and especially AI: we have this issue with critical thinking.

(36:42):
Obviously, critical thinking is a huge problem for some people, otherwise they wouldn't get divorced based on the predictions of an AI, right? So we have that issue already, and then we're going to start leaning into something that's going to do essentially what the calculator did for people.

(37:03):
Yeah, you know, you no longer have to memorize mathematical equations because something already can do it for you, and so we lose that ability to think properly through something. So, in a future where everybody's using ChatGPT, what do you think that's going to look like?

Speaker 2 (37:22):
It's just going to be everybody looking at their
phone, hey.

Speaker 3 (37:23):
Google, how do I cook this so that I don't die? Oh well, you need to add some lead-based paints, and you know.

Speaker 2 (37:31):
Okay, and you think we'll just start believing it so much that we're just going to be like, whatever the AI says, we're going to do? We adapt.

Speaker 3 (37:38):
We are, humans adapt. And one of the problems I have with AI is that... oh, look at that, your AI just went off on your phone. My AI just tried to tell you something, to intrude on my thing here. That was Gemini, that stupid Gemini thing.

Speaker 2 (37:59):
Well, you've got it loaded on your machine. There you go. I will say this: did you know that NASA resurrected Voyager 1's thrusters after 20 years? AI helped them figure out a solution to kick that back in.

Speaker 3 (38:13):
That's something that I think is cool. Using it to divine tea leaves, I don't think that's cool. No.

Speaker 2 (38:20):
I don't either. So I do think there are some great tools. Now, all of a sudden, you have these thrusters working, on a problem they couldn't solve for 20 years, and the AI actually figured it out. The simple thing was a switched item that they just rerouted to a different location and turned back on, and now all of a sudden we have this interstellar spacecraft with its thrusters going again,

(38:42):
so there's some good stuff about it.

Speaker 3 (38:43):
Yeah, I mean, the big problem is that it's something that mimics the human brain. It doesn't do it well, but we are thinking it does. All right.

Speaker 2 (38:54):
All right. Well, thanks for that mesmerizing moment. Up next we have This Week in Technology, so now's a great time to enjoy a little whiskey on the side, as I will be finishing my glass during the break. You're listening to Tech Time Radio with Nathan Mumm.

Speaker 3 (39:16):
See you in a few minutes. Hey, Mike. Yeah, what's up? Hey, so you know what? We need people to start liking our social media page. If you like our show, if you really like us, we could use your support on patreon.com. Is it Patreon? I think it's Patreon. Okay, Patreon. And you say, I'm the English guy, patreon.com. I butcher the English language. You know, you butcher the English, okay. So it's patreon.com. If you really like our show, you can subscribe on patreon.com

(39:36):
and help us, and you can visit us on that Facebook platform.

Speaker 2 (39:39):
You know the one that Zuckerberg owns, the one that
we always bag on.
Yeah, we're on Facebook too.

Speaker 3 (39:44):
Yeah, like us on Facebook.

Speaker 2 (39:45):
You know what our Facebook page is.
Tech Time Radio.

Speaker 3 (39:48):
Tech Time Radio At Tech.

Speaker 2 (39:49):
Time Radio.
You know what?
There's a trend here.

Speaker 3 (39:52):
It seems to be that there's a trend, and that's Tech
Time Radio.

Speaker 2 (39:54):
Or you can even Instagram with us.

Speaker 3 (39:56):
And that's at Tech Time Radio.
That's at Tech Time.

Speaker 2 (39:59):
Radio or you can find us on TikTok and it's Tech Time
Radio.
It's at Tech Time Radio.

Speaker 3 (40:04):
Like and subscribe to our socials. Like us today. We need you to like us. Like us and subscribe. That's it, that's it.

Speaker 1 (40:10):
That's that simple. And now let's look back at This Week in Technology. All right, we're going back to May 24th, 1935.

Speaker 2 (40:22):
The first night baseball game was played. Now, the first night Major League Baseball game was played in Cincinnati, the hometown of the Reds, defeating the visiting Philadelphia Phillies two to one. Night baseball caught on all around the league very quickly, except for the team in Chicago, as the Chicago Cubs did not play a night home game until 1988.

(40:43):
Yeah, this is very interesting. They used to always play baseball games during the day. They found that when you play in the evenings, people could go to work, and then after work they could go in and be a part of a baseball game. Duh, I know.

Speaker 3 (40:59):
More ticket sales.

Speaker 2 (41:00):
That's right. Well, that was This Week in Technology. If you ever wanted to watch some Tech Time history, with over 250-plus weekly broadcasts spanning our four-plus years, we have video, podcast, and blog information. You can visit us at techtimeradio.com, watch our older shows, and get all of our whiskey reviews. We are now becoming the whiskey library of the large language models of all the Internet for our thumbs-up and thumbs-down

(41:23):
reviews. We're going to take a commercial break. When we return, we have Mark's Mumble Whiskey Review. See you after this.

Speaker 7 (41:29):
Attention all geeks and pop culture enthusiasts, get ready for the ultimate celebration of everything geek at GeekFest West Game Expo, July 18th through the 20th in downtown Everett, Washington. Join us for three thrilling days packed with cosmic cosplay, gaming tournaments, retro movies, and a street fair brimming with unique vendors.

(41:50):
From the innovative Geektopia Vendor Hall to the Galactic Time Warp showcasing beloved film classics, including Ghostbusters, The Wrath of Khan, and our special 40th anniversary showing of The Goonies, there's something for everyone. Plus, participate in interactive events, from keynote speakers each day to special guest artists.

(42:10):
Tickets are on sale now. Secure your spot for this epic celebration at geekfest.com. Get your badges, from one-day passes to VIP options, and don't be left out. Visit geekfest.com. GeekFest West, the biggest gathering of geek fandom in Snohomish County.

Speaker 1 (42:30):
The segment we've been waiting all week for: Mark's

Speaker 5 (42:42):
Whiskey Mumble. May 20th. What are we doing today? What are we celebrating with our whiskey, May 20th?

Speaker 3 (42:50):
Oh, we're celebrating with our whiskey, May 20th. It's the what? The 5/20 instead of 4/20? Wow, marijuana.

Speaker 2 (42:58):
No, 5/20. We are celebrating the ability to share whiskey with friends. Wow. No, it's not that corny, okay? Yeah, wow, you can say that. Okay, what is

Speaker 3 (43:14):
it.

Speaker 5 (43:14):
We're celebrating national coffee-divining day? Today, May 20th, is Be a Millionaire Day.

Speaker 3 (43:22):
Oh, okay, well, I can get behind that. Who's giving out the money, so I can be a millionaire? Well, let me tell you, Mike, when you think of being a millionaire, what's the first thing that comes to your mind?

Speaker 2 (43:35):
uh, not working.

Speaker 5 (43:36):
I think of the TV show Who Wants to Be a Millionaire. Whatever your personal dreams may be, celebrate today by doing at least one thing that makes you feel like a million. Oh, so treat yourself. Oh okay, treat yourself.

Speaker 3 (43:51):
Is that what you're saying? Don't look, you made... you made Odie... Odie knows where that's from.

Speaker 5 (43:59):
Oh, I do not know where that's from. Parks and Rec. Oh, I do, okay. One of the best scenes ever on TV.

Speaker 3 (44:08):
I feel like that's a little bit of a letdown, Mark. You know, not the saying, but the whole Be a Millionaire Day.
Speaker 5 (44:18):
Well, Mike, we are celebrating Be a Millionaire Day with Blanton's Gold. Oh, I get it. The bourbon choice of those living the lifestyles of the rich and famous, minus the yacht. Okay, with bourbon wishes and barrel-proof dreams.

Speaker 2 (44:32):
If you ever, oh no, we're going to go in there.
What was that?

Speaker 5 (44:36):
See, Odie didn't laugh because she doesn't know. She doesn't know what that means.

Speaker 2 (44:40):
What's his face?
That did the?

Speaker 5 (44:41):
rich and famous.
Yeah, lifestyles Hi.

Speaker 3 (44:46):
What was his name?

Speaker 4 (44:49):
Hang on a second. I don't remember. Oh, I remember.

Speaker 2 (44:51):
I see it as a lifestyle. Okay, keep on going. Anyway, Mark.

Speaker 3 (44:54):
Lifestyles of the Rich and Famous. That's pretty good, wasn't it?

Speaker 5 (44:59):
Yeah, I can almost hear his name. All right. Blanton's Gold Edition was originally released exclusively for the Japanese market in 1993, nearly a decade after the debut of the original Blanton's Single Barrel in 1984. It was then later also released in Europe. In 2020, after 27 years of being available only

(45:20):
internationally, Blanton's Gold Edition was introduced to the US market, allowing American bourbon enthusiasts to experience this once-exclusive expression, assuming you can even find it.
What can I say about Blanton's Gold? Well, quite honestly, a lot. It is a delicious pour that stands well above its more

(45:42):
common sibling. I usually keep regular Blanton's on hand as a crowd-pleaser tater bottle. It is fun to share, but not something I reach for myself. Blanton's Gold, on the other hand, holds a special spot on my shelf. It's reserved for moments with my fellow whiskey lovers, and for those times I want to treat myself, and I'm sharing it with you too.

(46:04):
It offers a richer mouthfeel, a subtle Kentucky hug.

Speaker 3 (46:07):
I feel like a millionaire right now, yeah.

Speaker 5 (46:10):
Mike, and a burst of those classic bourbon notes we all chase. Simply put, this one is a standout and well worth the hunt to find a bottle or have a pour at a bar.

Speaker 2 (46:19):
Now, is this like champagne wishes and caviar dreams?

Speaker 3 (46:22):
Oh, there it is.

Speaker 4 (46:23):
As Robin Leach would say. Robin Leach, on Lifestyles of the

Speaker 2 (46:27):
Rich and Famous. There you go.

Speaker 3 (46:30):
You remember that show already.

Speaker 2 (46:31):
Do you have?

Speaker 3 (46:32):
any idea what that show is.

Speaker 4 (46:34):
I have no idea what you guys are talking about.
But I have heard that phrase, or that... whatever you want to say.

Speaker 3 (46:41):
That's how he closed out each of the shows. That was his close. You have to be old to know that, like our friend Greg Minab. Yeah, this is a program to show you how bad off life really is for all of us poor slobs.

Speaker 2 (46:53):
Okay. So what does Whiskey Chris think of this?

Speaker 5 (46:58):
Well, he's got a bottle of this exact one too.

Speaker 2 (47:01):
He loves it.
You notice, my glass is gone.

Speaker 5 (47:03):
I know, so is Mike's. I'm the only one with a little whiskey left.

Speaker 4 (47:08):
So it's almost $300?

Speaker 5 (47:10):
Well, that's if you go to a certain liquor store that really does museum prices. It's $105 MSRP.
Speaker 7 (47:19):
Okay.

Speaker 5 (47:20):
And it's trading on the secondary market.

Speaker 3 (47:23):
You know what else has MSRPs? Cars, that's right.

Speaker 4 (47:27):
Without giving away your mumbles.
Would you buy it, Nathan?
Absolutely.

Speaker 2 (47:33):
No, it's too much. I would have a bottle. I have Blanton's up there on my shelf.

Speaker 4 (47:40):
You inherited that, you didn't buy it.

Speaker 8 (47:42):
No, I bought that.

Speaker 4 (47:44):
Nathan's the cheapo.

Speaker 5 (47:46):
I'm not always a cheapo, did you?

Speaker 2 (47:49):
buy the Blanton's? Yes, I did. I'm 100% sure. Did you buy it at MSRP?

Speaker 3 (47:55):
I bought it once more. I still buy all my stuff, all right.

Speaker 2 (47:58):
Whiskey and technology are a great pairing, just like the only two songs sung at every Major League Baseball game. Do you know what they are? It's Take Me Out to the Ball Game and the national anthem.

Speaker 3 (48:09):
You know, you having to explain that just takes the air out.

Speaker 2 (48:13):
No, that's a great combination of pairings.
All right, let's prepare for it.

Speaker 5 (48:16):
And you know what baseball is, right, Odie? Wow, the young kids don't do baseball.

Speaker 3 (48:23):
Well, I think that's a fair question. She knows who that is. If you don't know... are you serious?

Speaker 6 (48:28):
The station rebroadcasts the minor league team. I have to sit through baseball every week.

Speaker 3 (48:36):
There it is. I have to sit through it. Well, and I played softball, okay, so I do know. What position did you play?

Speaker 4 (48:42):
Catcher and outfield.
Don't feel bad.

Speaker 3 (48:45):
I mean, Lifestyles of the Rich and Famous, you don't know about it.

Speaker 2 (48:51):
Now let's move on to our technology fail of the week. This is our leader. This is now the fourth time they have had a technology fail on our show.

Speaker 5 (48:58):
They are the leader.

Speaker 2 (48:58):
You know how they kind of do the Five-Timers Club for Saturday Night Live, when you host five shows? So this is now becoming our fourth. This is our leader. Our fourth. Are you

Speaker 5 (49:08):
sure Tesla's not ahead of them?
No, they're not, they're tied.

Speaker 2 (49:11):
They're tied.
Okay, all right, here we go.
Congratulations, you're a failure.

Speaker 6 (49:16):
Oh, I failed.
Did I, yes, did I.

Speaker 2 (49:20):
Yes, all right, our fourth time we've had this company on. It is Waymo, our technology failure. Now, you know what? Let's just talk about what happens when you get in a wreck in a Waymo car. Do you know what happens? You die horribly. No, it's just, who gets the ticket? Does the driver, does the company? That's still up in the air,

(49:41):
depending on which state you're in. What do

Speaker 5 (49:43):
you mean, there is no driver?

Speaker 2 (49:51):
How is that up in the air? As a passenger, there's literally nothing that you can do to influence the... well.

Speaker 5 (49:54):
It depends on if you're in California or Arizona
or Texas.
You're sitting in the back seat.
Yeah, all right, let's continue.
More than 1,200.

Speaker 3 (49:59):
I guess if you're stupid enough to ride in one, you should probably get billed for it. All right.

Speaker 2 (50:04):
More than 1,200 Waymo cars had to get their software updated because of reported incidents of Waymo's vehicles hitting gates, chains, and other road obstacles. Like just going for it and hitting it.

Speaker 3 (50:15):
This is the same one that couldn't navigate snow, right? Yeah, that's right.

Speaker 2 (50:18):
That's right. Phil talked about that on our show. So the voluntary recall was submitted last week to the National Highway Traffic Safety Administration regarding the affected fifth-generation automated driving systems. However, officials said the issue was resolved through a software update that was rolled out starting November 2024 to the 1,000-plus cars impacted.

(50:39):
The recall doesn't impact any Waymo cars currently on the road, because those are, I guess, sixth generation and above. Before the update, the driverless cars caused minor crashes with chains, gates, and other obstacles on the road. No injuries were reported, and the NHTSA started an investigation into Waymo in 2024 after receiving reports of 16

(51:00):
incidents regarding minor crashes. Now, Waymo provides more than 250,000 paid trips every week in some of the most challenging driving environments in the US. They say that they hold themselves to higher standards, and their records indicate fewer injuries over the tens of millions of fully autonomous miles driven, using their technology to make roads safer.

(51:23):
Now, a Waymo spokesman says that they are the world's most trusted driver. The voluntary recall submission comes after more than 600 self-driving cars needed a software update in June of 2024 because one hit a telephone pole.

Speaker 3 (51:37):
I want to ask the Waymo spokesperson if he uses
Waymo.

Speaker 2 (51:42):
Do you guys use the product?
I'm sure he does.
He's probably like 16 years old.

Speaker 5 (51:45):
I might disagree with this a little, Mike. I've been traveling a lot, and human drivers are horrible. Are they? Absolutely horrible. I'd rather take a car that occasionally hits a gate or a chain than ride with most humans.

Speaker 3 (51:56):
Really? What about when they can't navigate construction and drive into construction zones?

Speaker 4 (52:04):
They can't even park in their own parking lot. Yeah, they circle the parking lot.

Speaker 2 (52:07):
Sometimes they park in the middle of the intersection and let people out. Now you can exit. Well, isn't that like a lot of

Speaker 5 (52:12):
humans? Yeah, a lot of humans just park anywhere, and all the Uber drivers park in the middle of the street instead of going to a parking space. I can, hang on, I can get behind...

Speaker 3 (52:24):
I can get behind that, partly because I drive every day and I see stupid people, and you know, I'm not the best driver in the world either, but I'm a known entity. All right.

Speaker 2 (52:35):
Well, guess what? They've now had 202 crashes in Arizona from 2021 to 2024, and only 31 of them resulted in any injuries.

Speaker 3 (52:44):
Only 31 of them resulted in injuries?

Speaker 2 (52:47):
And officers determined the self-driving cars are not at fault 87% of the time.

Speaker 5 (52:52):
Bingo.

Speaker 2 (52:53):
So if you're in the 13%, best of luck.
You're just sitting there andyou're locked in a cage and
hopefully no one hits you in theback, which happened to some
individuals.

Speaker 5 (53:01):
So, humans 87% at fault, Waymo 13% at fault. All right.

Speaker 2 (53:06):
You know what I like that number.
Let's move right now,immediately.
Can we go right into the Nathannugget?
All right, let's do that.

Speaker 3 (53:12):
Apparently, we can't hear it.

Speaker 1 (53:17):
This is your nugget of the week.

Speaker 2 (53:20):
Hopefully your mic wasn't on there, right?
Okay, a little dead air and wedon't want to hear those
profanities.
That was a bleep out All right,those profanities.
That was a bleep out all right.
Now apple users have sevenweeks to claim their 100 back.
Make sure you listen to episode252 to learn more.
But today we're continuing withsome apple news.
Now the iphone 17 release date.
When is the next iphone comingout?
Over the last several years,apple has consistently announced

(53:42):
its new phones in the first half of September. This will likely be the case again, with the full iPhone lineup planned. Now, Apple could move its US phone production from China to India, and they've already escaped many of the tariff hikes thanks to a reciprocal tariff exemption list that includes many phones, laptops, and other electronics that Apple directly

(54:04):
produces.

Speaker 5 (54:06):
That's interesting. Do they donate to the $1 million inauguration fund?

Speaker 2 (54:12):
They all did. The iPhone 17 Pro's camera has been subject to multiple rumor changes. Most notably, Apple may have a horizontal camera bar that spreads across the width of the phone, so when you hold it up, you can take full-sized landscape pictures. Now, the front-facing selfie camera could also get an upgrade, with a leaked image on X suggesting the iPhone could feature a

(54:33):
pill-shaped camera bar that looks a lot like the camera bar on the Google Pixel 9 phone.

Speaker 5 (54:39):
Is Odie going to run out and get one of these?

Speaker 2 (54:41):
No. Look at that, she's like, no.

Speaker 4 (54:44):
I still have the 13. I have it upgraded. Okay, until this dies. You better hurry.

Speaker 5 (54:50):
I have it upgraded.

Speaker 3 (54:50):
Until this dies, I'm not going to upgrade. The technology hostage thing is going to kick in, and you're going to have to.

Speaker 2 (54:56):
Now, you know what? With the new iPhone 17, the models will have to step up to more memory. They're going to have to have 12 gigabytes of RAM minimum, because they can't do all their AI stuff

Speaker 5 (55:05):
that's available. 12 gigabytes?

Speaker 2 (55:07):
They're going to have to do that, a pretty big jump. Larger hard drive space will also be needed to actually make sure they can run the algorithms in the background. And with that, guess what? You're probably going to lose battery life. It looks like the new phone will probably only have the 8-to-10-hour battery life, as Apple has been plagued with before. There you go. Yeah, use your

Speaker 3 (55:29):
Apple device.
It goes dead in an hour.

Speaker 2 (55:31):
Yeah, my, my.

Speaker 3 (55:33):
I don't even know.
I think it's an iPhone 15.
Yeah, for work.
Yeah, it goes dead after a day of not using it.

Speaker 2 (55:43):
Yeah, that's welcome to Apple. All right, now let's move on to our pick of the day whiskey tasting.

Speaker 1 (55:53):
And now our pick of the day for our whiskey tastings. Let's see what bubbles to the top.

Speaker 5 (55:58):
Today we're drinking Blanton's Gold Edition from Buffalo Trace. Straight bourbon, 103 proof, $105. But more than likely it's going to be $200 to $300.

Speaker 2 (56:10):
Yeah, thumbs big time up. Love this, love this. Probably the best whiskey I've had since yesterday... three weeks ago. All right, Mike, what do you say?

Speaker 3 (56:21):
Okay, I don't even think I have to say it, because I like it. It's good. It's a thumbs up for me.

Speaker 2 (56:27):
And this is on your shelf? Yeah, on your top shelf. Oh yeah. All right, there you go. You love it a lot. It's getting low, though.

Speaker 5 (56:31):
It's getting low, though. I'm going to have to baby this.

Speaker 4 (56:34):
Are you going to treat yourself?

Speaker 5 (56:36):
Oh, you know it, Odie.

Speaker 2 (56:39):
Are you going to treat yourself today? Yeah, yeah, treat yourself well today. Be a millionaire.

Speaker 3 (56:45):
Yeah, you know what I mean. You did buy a motorcycle. I did buy my motorcycle.

Speaker 5 (56:50):
Wow, you are a millionaire.

Speaker 3 (56:52):
Yeah, I got a million-dollar life insurance policy. All right.

Speaker 2 (56:57):
Well, you know what? We thank you guys so much for listening to our show. It's the fans like you that put on this show, and it's a crazy producer like Odie that makes sure it all works. All right, we want to thank everybody that listens. Always remember: the science of tomorrow starts with the technology of today. We'll see you guys next week. Later.
Bye-bye.

Speaker 1 (57:20):
Thanks for joining us on Tech Time Radio.
We hope that you had a chance to have that hmmm moment today in technology.
The fun doesn't stop there.
We recommend that you go to techtimeradio.com and join our fan list for the most important aspect of staying connected and winning some really great monthly prizes.
We also have a few other ways to stay connected, including

(57:41):
subscribing to our podcast on any podcast service, from Apple to Google and everything in between.
We're also on YouTube, so check us out at youtube.com/TechTimeRadio, all one word.
We hope you enjoyed the show as much as we did making it for you.
From all of us at TechTimeRadio: remember, mum's the word. Have a safe and fantastic week.