
September 27, 2024 · 39 mins

Are we going to be able to trust AI?  More importantly, are we going to design AI that is worthy of people's trust?

Going back to our roots, host Ricky B. chats with someone he met in a coffee shop. Frederick Kautz speaks at conferences across the tech world and works in cybersecurity and open source.

If you want this podcast to continue... please help. $1 each month will be a big help. Click here to donate at Buy Me A Coffee.

If you would like to contact The Fremont Podcast, please text us here.

Petrocelli Homes has been a key sponsor of The Fremont Podcast from the beginning. If you are looking for a realtor, get in touch with Petrocelli Homes on Niles Blvd in Fremont.

Haller's Pharmacy is here to help. They have been in our community for decades.


Founder: Ricky B.

Intro and outro voice-overs made by Gary Williams.

Editor: Andrew Cavette.

Scheduling and pre-interviews by the amazing virtual assistant that you ought to hire, seriously, she's great: your.virtual.ace

This is a Muggins Media Podcast.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Are they going to have negative impacts?
Are they going to have errors that cause aircraft to fall out of the sky?
I mean, that's what I'm saying.
We shouldn't just dismiss the fears, we should understand what and where and try to find ways to make sure that, in time, we're developing these things not just to drive the technology

(00:23):
forward, but that we're also doing so in a way that people can come to trust.

Speaker 2 (00:28):
Coming to you straight from Fremont, California.
This is the Fremont Podcast, dedicated to telling the stories of the past and present of the people and places of the city of Fremont, one conversation at a time.
Now, here's your host, Ricky B.

Speaker 3 (00:44):
I'm going to leave the door open, because for the most part it just doesn't. The sound out there doesn't transfer, but even if it does, I don't mind the ambient sound of the community around us.
Honestly, I feel like this conversation, this interview, is kind of going back to the heart of what inspired me to start

(01:06):
the Fremont Podcast, and that is meeting somebody at a coffee shop and having the only connection that you have with that person in that moment be having a cup of coffee.
And you and I met at Devout Coffee.
We've interacted on a number of occasions.
I think one time we had a significantly longer conversation together just about life and how we grew up or

(01:30):
whatever, and yeah, it's been fun getting to know you.
So thank you for being on the podcast with me here.

Speaker 1 (01:36):
Thank you for the invitation.

Speaker 4 (01:37):
Today, we'll join Fred as he pulls back the curtain on the unsettling truth of software security and invites you to reconsider your approach to trust in the cloud.
Please welcome Frederick Kautz.

Speaker 1 (01:51):
So before we get, before we jump into the topic, first I want to ask: you work broadly in cybersecurity, basically, or what

Speaker 3 (02:00):
would you what would be a broad umbrella for work
that you do?

Speaker 1 (02:03):
So I wear multiple hats when it comes to my profession.
So one side is, I participate heavily in a thing that's called open source, which is like software that is given out for free, and it's like the same thing that a bank or an airline uses.
You can literally go and download and use it yourself.

(02:25):
The barriers tend to be difficulty to use some of the software. So sometimes some of them are super easy, and others sometimes are just super complex, but the tools are there.
You don't have to go and build up a huge team and spend lots of money on a particular thing.
You can get started with one of the things and then, as you start to build out your company, then you can find

(02:47):
support, or maybe you support yourself and contribute back to the community.

Speaker 3 (02:51):
Okay, well, that's cool.
I know that one of the things that I've heard from you, um, as we've met and we've talked at Devout, is that, um, you are oftentimes a speaker at large conferences, or, I don't know if they call them summits or whatever. But so what is it that you do, what is it that you offer in those conferences and

(03:14):
stuff that makes you a speaker?
Tell me a little bit about what you would be speaking on in those instances.

Speaker 1 (03:29):
So the largest engagement I had, there's a major software project that's called Kubernetes, and for a period of time I was the co-chair for the whole conference.
So I co-chaired three of them. To give you a sense of the size, we would work with companies from all over the world.
We would see people from Google and Microsoft, and other major

(03:52):
similar-size companies would contribute, but also lots of startups too.
And when we would run an event, we would run one here in the United States, or I should say North America, and then there would be another one in Europe somewhere, and sort of alternate between them.
And now they've expanded out to China and

(04:14):
to other regions as well.
So the size of the conference was roughly somewhere between 10 to 12,000 in-person people, and then there was also a whole online component as well.
What's interesting is that all of the talks that we did there, almost all of them would be recorded.
And then they're put on YouTube.

(04:35):
There's no paywall or anything, you just go and learn.

Speaker 3 (04:39):
Access them.

Speaker 1 (04:40):
And so then the question is, why would this still attract so many people when you can literally just go to the YouTube page and see it?
And it turns out that there's a real value in the community itself.
Like, some people, they want to expand their network, if maybe they're looking to join in a particular project or start

(05:03):
a new project. Maybe some of them are trying to make sure that, uh, they have better job opportunities moving forward.
better job opportunities movingforward.
Of course, companies are always looking for people as well, so there's a huge advantage in terms of the community there.
And, uh, often people say, like, oh, there's a security track, a network track and all these

(05:23):
things, and people say, well, what's your favorite? And it's the hallway track, the one where I get to meet and talk with people.
That's awesome.

Speaker 3 (05:30):
Yeah, so for me that's like the big one.
That's cool.
Yeah, I think that that's interesting, because when you think of technology, or you think of, I guess, digital platforms, however you want to phrase it, I think oftentimes those platforms, those venues, can be kind of an escape from the hallway, the

(05:53):
hallway venue or the topic, but I love the fact that you enjoy connecting with people on that.
So, when you would speak in some of these conferences and help host them, um, you had mentioned just, uh, in passing a minute ago that you helped build some of the structures, I guess, that some of these companies working on 5G

(06:15):
were using. Is that the kind of stuff you would talk on? What are some of the things that you would talk about?
It is one of the topics.

Speaker 1 (06:23):
So the term, the thing that I pushed forward that changed a significant portion of that industry, there was a desire by service providers. So when you say service providers, think of, like, telecoms, or the groups who, like, route portions of the internet. Yeah. And there was a desire for them to move towards

(06:47):
Kubernetes, that I described before, which is that distributed, like, how do you run things in a distributed manner?
And one of the problems that they ran into is that you can't just take what you did before and then shift it into another environment and then expect everything to just work.
Because if we could just do that, why would you leave the

(07:08):
first environment? Because it already meets your needs.
And so there's something new, there's something there that they want, but it's almost similar to, like, if you move to another country, you want to learn the language, you want to learn how the road rules work.
You want to make sure that you have some level of understanding

(07:29):
.

Speaker 3 (07:30):
Yeah, in some ways yeah, that's interesting In some
ways, like if I were just to goto a different country there's
going to be some very humanelements that are going to be
consistent Eating, drinking, youknow, sleeping.
There's going to be some thingsthat you're going to find in
various places that are notgoing to be any different than
you would at home, but then whenyou?
But then there's also going tobe elements of that that are

(07:51):
going to be very foreign.
Yeah, so in the software world, though, your job is to kind of, like, help figure out how to take something from one environment and make it work in a different environment, or

Speaker 1 (08:02):
Well, in this scenario, for the telecom, the problem that they had was they wanted to move to this new environment because it would allow them to basically run at, like, a larger scale and allow them to move faster and have, like, a more unified way of doing things.
But there was no good guidance on how to make use of the

(08:25):
environment.
So it's sort of like a travel guide, you know, using the travel theme. Imagine you have a travel guide that says you'll have a better time if you learn these particular things.
So it's sort of like that.
So the term, the thing that we created, so there's a thing called a network function. Okay. And please stop me if I get too. No, that's great. Yeah. A network function is like something that happens in a network, like, if you're connected with a Wi-Fi,

(08:51):
like, that radio itself is considered to be, like, a function of that network.
If you have, like, a firewall to protect your system, to keep bad actors out, that's another network function.
So we ended up creating this thing that was called a cloud native network function.
So it's like they're designed to live in this environment and to know how to interact with that.

(09:12):
So, like, if you need more, it knows how to spin them up; if you need to shut them down, it knows how to, like, bring them down properly.
And so we ended up creating this thing that was the cloud native network function, the CNF is what it was called, and I put together, like, here's a list of, like, 15 rules that you need to follow, um, in order to develop

(09:33):
one effectively. Yeah. And they were not, like, super detailed rules. They were more like heuristics, or top-level, like, um, high-level guides in terms of, like, how to have a good time in this space. And it just caught on like wildfire. Like, I actually helped run a telecom

(09:53):
conference in the past, before I did the KubeCon one. Okay. And when I went there, people, like, almost every booth would have CNFs, like, written on top of it. Here's, like, we do CNFs.
I went to Mobile World Congress. That's in Barcelona. It's like a hundred thousand people all show up there. Yeah.

(10:14):
And some of the booths, like, literally had that same term, and they're saying that we do them as well, and it just, like, blew my mind. Because it's, like, just toss out this little piece of information to be helpful, and it turns out that a lot of people in that industry had some benefit from

(10:36):
understanding how to make that happen.

Speaker 3 (10:38):
That's cool, that's cool.

Speaker 4 (10:41):
Please consider donating $1 a month on a recurring basis to help this podcast that you enjoy.
Buymeacoffee.com, slash the Fremont Podcast, slash membership.

Speaker 3 (10:55):
So did you grow up here in Fremont?

Speaker 1 (10:56):
No, I grew up in Texas.

Speaker 3 (10:58):
Okay, okay, what part of Texas?

Speaker 1 (11:01):
El Paso.

Speaker 3 (11:02):
Okay, and so what was it that brought you to the Bay Area then?

Speaker 1 (11:04):
So when I finished university, or I should say when I finished my undergrad, it was the middle of the housing crisis, so the 2008, 2009 time period, and I was looking for work. Took me several months to find something.
It happened to be out here, and so once I got that job offer, I

(11:26):
shoved everything into a car and drove across.

Speaker 4 (11:29):
Wow.

Speaker 1 (11:30):
Didn't even have a place, like, planned to stay.

Speaker 3 (11:32):
So I'm curious.
You know, the comment that you made earlier about liking the hallway, I think that's great.
I think it's fascinating, though, too, because I feel like a lot of what makes this area so popular for people to move to, the reason that you came here, no doubt, was because of the

(12:04):
centricity of technology and technological advancement.

Speaker 1 (12:05):
Well, I came here for the job. You came here for the job, yeah. Yeah, but I stayed for the people. There you go, there you go.

Speaker 3 (12:09):
Well, I guess what I was going to say is, I feel like people come here for jobs, and it has to do with a tech world, a digital world, and yet you still find obvious joy and energy from interacting with people and talking with people.
So I'm curious, what are your thoughts on the digital space?

(12:32):
Even AI, as it, in some sense, takes away from human hands, human fingerprints, human creativity? I mean, obviously, I think AI is built off of whatever we imagine.
So there's, like, a sense of a residualness of humanity built

(12:57):
into what AI does.
But for someone who seems to love engaging with people, someone who seems to interact with people but also works in a digital space, that seems to be, I think, creating a lot of fear for some people, or at least curiosity as to what this is going to end up looking like.
What are your thoughts about what you do and how you're

(13:20):
involved in what you do, versus a more human, physical world, kind of like a reality?

Speaker 1 (13:29):
That is a great question.
It's also much more open-ended than you may realize, so just to scope it down a little bit: when people talk about AIs, they usually are thinking of what they call generative models or large language models.
Generative could include sound or something that's visual,

(13:49):
video or photos. But there's also lots of tiny AIs, like, little AIs that fit in the smallest of computers, that are also very useful, and they all have their place. But I think for this particular one, I think we should talk about the larger ones.

Speaker 3 (14:10):
Because that's the one that everyone is focused on.
I think that's a great distinction, though, too, because there's probably a lot of AI that we've become used to for a long time that allows us to do the things that we need to do, and if it were to go away entirely, we probably would lose our minds.

Speaker 1 (14:27):
Well, the little ones are quite interesting, because they're designed to do usually one task, and those tasks could

(14:47):
be something as small and as simple as, how do I make the engine in a car slightly more efficient so that we get more power out of it and less wear and tear?
Or they could be related towards, in the self-driving car, it's like, what speed should I travel at? Like, the cruise control is a good example. There's actually an application of AI there that you'll often find, because they'll have to, like, monitor what's in

(15:09):
front of them. They have to make decisions. Yeah, that's great. So there are these tiny AIs, and we don't even think about those ones, um, and I think over time we're going to find them everywhere, doing these tiny little tunings that we used to have to do manually, and now we just don't.

Speaker 3 (15:23):
Yeah, that's right, that's right. But yeah, so the big ones, like, what are your thoughts on those?

Speaker 1 (15:28):
So the big ones, I think, as a society, we're still trying to work it out. Like, there's a lot of excitement, and there's also a lot of fear, but we need to be careful not to dismiss the fears of people as well.
It's not just an issue of, like, even with the internet, uh, back when it was coming around, like, we can educate people on how to

(15:49):
effectively use it, how to adapt to it. Uh, help people work out which jobs are going to change.
There are jobs where some of the roles literally faded out of existence, um, and something else replaced them. Yeah. Um, and it wasn't always a job that replaced them.
Sometimes it was, like, literally, a program would do

(16:10):
some work.
And so, I think it's important, as, uh, as technologists, to help people understand where we're going here, and I think it's something we could do a lot better job with, is how people understand. Because I think a lot of the fear comes from that, like, what is the future going to look like, or are these things going to steal, or are they gonna

(16:32):
?
Are they gonna steal our jobs?
Are they going to have negative impacts?
Are they going to have errors that cause aircraft to fall out of the sky?
I mean, that's what I'm saying, we shouldn't just dismiss the fears, we should understand what and where and try to find ways to make sure that, in time,

(16:54):
we're developing these things not just to drive the technology forward, but that we're also doing so in a way that people can come to trust.

Speaker 3 (17:08):
Yeah, yeah, and because they have reason to trust it. Yeah, that's good.
So what are the things? I'm just curious, from your perspective, and maybe this is too personal, but what are some of the things that you're excited about when it comes to AI, and what are some of the things that you might have fear about, knowing what you know?

(17:28):
They say ignorance is bliss.
A lot of times, people who are ignorant, they have no fear of whatever it is. Like, I think of my one-year-old, or my two-year-old now, who is learning that there is a fear of crossing the street without, you know, without a parent.
So for him, it's just, I'm going to go get the ball, and that's

(17:50):
what's blissful: I can get the ball because there's nothing in my way. But as a parent, it creates fear in me.
So what is it, from your perspective, with the knowledge and experience that you have, that either brings you hope and excitement about the future with AI present, and then what is it that maybe brings you fear?

Speaker 1 (18:10):
So I tend to think of these things both in long-term and short-term.
I'll start with the short-term ones. Those are a little bit easier and a little bit more concrete.
The thing that I'm really excited about in the short term is we're starting to see AI applied in healthcare environments, specifically around, how do we develop or discover new drugs, as an example?

(18:34):
So one of the things that we try to do is look at a certain protein and try to work out, based on this protein, does this protein, or does this molecule, have an impact on, let's say, a virus or something similar?

(18:56):
And so the idea, when they're looking for, let's say, cancer treatment, or they're looking for how to eradicate certain types of diseases, or maybe not eradicate, maybe just keep them under control, so that maybe a person still gets the disease but the impact is no longer there.

Speaker 3 (19:12):
Yeah.

Speaker 1 (19:13):
And I think, the AIs, when you think of AIs, well, there's different kinds of AIs, but the ones that most people are focusing on, they optimize.

(19:33):
Like, they'll have to take this huge space and then they'll try to optimize in some way in order to see how they impact, and you have to actually do the math in order to work through it.
It's like, those are areas where perhaps they can help out tremendously, because they can look at the results.
They can make a decision and say, well, what if we change this, or what if we change that? And then it's, like, on autopilot at that moment and starts

(19:55):
spitting out useful molecules that need to be investigated.
So, a little bit of a complex answer, but I think, like, when you start looking at it from that perspective, I'm really excited to see what kind of things they can come up with. So, lower the cost of discovering some of these,

(20:16):
because, and it's not the only reason, there are some social reasons that exist as to why some medicines are so expensive. Okay. But for newer medicines, one of the things that the companies argue is that they have to recoup their research cost.
Right, right. And if that research cost is significantly reduced, then us as a society can argue that we should have

(20:37):
more accessibility.

Speaker 3 (20:37):
Oh, that's interesting. Yeah.

Speaker 1 (20:39):
So which really targets the economics of it as
well.

Speaker 3 (20:42):
That's great. Yeah.
So there's definitely an economic upside to a lot of this, if some of this can be done that would otherwise cost the researchers a lot of money and time, and therefore pass that cost along to the consumers, so that all of a sudden it becomes inaccessible.

Speaker 1 (21:03):
That's the hope now. Whether that happens or not, right, is much more complex, but it provides the basis for it to happen. Sure, sure.
Um, in terms of fears, the short-term fear that I have the most, uh, there's two of them. So the first one is, there is an

(21:24):
argument that is going on right now that a lot of companies are looking at their finances, and they're afraid of falling behind on AI, and so they want to spin up these huge clusters and these huge training sets, but they have to get the money somewhere.

Speaker 2 (21:42):
So what's?

Speaker 1 (21:42):
the most readily available place of funds?
In some scenarios, it's not, like, bringing a product to the market, and now they have more funds, and then they put that into research and development.
It sometimes becomes, well, let's get rid of these 5,000 people, and then that'll free up our. Interesting. So yeah. Uh, so it's not that, like, AI is going to take your job by doing it

(22:03):
better than you.
It's like, AI ends up affecting the jobs. At least, that's what the pattern appears to be. Yeah.
I've not done enough research into it to know whether this is actually what is happening, but just my initial view of some of these layoffs, it seems to be tied towards that. Well, how do we

Speaker 3 (22:21):
free up funds so that we can do this other thing?
I found the irony to be a little bit hilarious, in my opinion, because I don't know what I'm talking about, but it was still hilarious.
I remember seeing a sign on the peninsula a number of times. I think it was Enterprise AI, I think that was the company name or something like that, but they

(22:42):
said, now hiring.
And I just remember thinking. I mean, it kind of seems like a doom and gloom sort of scenario, like, if you're really good and you do your job really well, you will no longer be employed when you're done.

Speaker 1 (22:58):
I don't want to go that far. For me, I'm a very optimistic person, yeah.

Speaker 3 (23:01):
I am too.

Speaker 1 (23:03):
And just the same way the internet was going to destroy a bunch of jobs, it actually created more jobs in the long run.
And my hope is that what it allows us to do is, if you look at what humans are capable of as a whole, given a chance, like, I hope it helps us move people away from certain very repetitive jobs and instead

(23:28):
gives people the opportunity to, uh, to basically branch out and be able to work on things. And if you have an idea, like, you may not know how to perform that specific task, but, uh, like, I'll use a specific example. Uh, let's say you want to do research on a given topic,

(23:48):
uh, it's not uncommon now for people to do an initial pass, say, to open up an AI and ask, hey, how does a car engine work?

Speaker 3 (23:58):
Or how does?

Speaker 1 (23:59):
Doesn't mean the answer is gonna be good necessarily, but at least it gives you a starting point, and then you can go and look up those things and have some starting place.
So I think there's a, in the same way, like, as computers over time started to gain more traction, they created lots of additional jobs.
My suspicion is that that'll end up

(24:21):
happening. The question is going to be, between now and when we get to that maturity, yeah, uh, what's going to happen in between? That is, yeah, it's really a question. Yeah, yeah.

Speaker 3 (24:33):
It's interesting.
I like your perspective, and I'm glad you're an optimist, because I think that's helpful.
I remember reading a book, not really on this particular subject, but it was more on culture and what it means to be a culture maker.

(24:53):
I think that we are all, sometimes, the book kind of called out people who were just, like, we're trying to adapt to the culture, and the point that he was making in the book was that we're all culture makers.
So whatever we do, our adaptation to the culture that we perceive, is not an adaptation to culture

(25:15):
.
It's actually culture making.
You're actually making culture.
So he used the example, like, whenever there's an invention that kind of replaces or fixes problems, replaces, you know, less adequate, you know, devices or whatever.
So, like, for instance, you know, you can use the iPhone as an

(25:37):
example.
So you go from having to have a desktop computer, or a laptop even, and you can accomplish a lot of tasks on your handheld device now.
Or even, like, a camera. Like, a lot of people would buy a camera, and they would have a camera with them, and then they'd also have a phone or whatever. Now you can do all of that on one.
Well, that solves some problems, and that eliminates, like, all

(25:59):
of a sudden, like, not as many people are out buying cameras, and not as many people are out buying computers, because they can get it all on their cell phone. But then, all of a sudden, by the existence of the cell phone, there opens up a whole new world of things that are needed, like cell phone cases. There's insurance for cell phones, there's, you know,

(26:21):
support, the actual support, to be able to carry the data, you know, from phone to phone or whatever.
So in some sense, yes, you can mourn the loss of the camera on one side, but on the other side, it's like there is a whole other side of it that now creates a need for so many other things as well.
So, if I hear you right, that's along the lines of what I'm

(26:44):
understanding.

Speaker 1 (26:45):
It definitely is, and we have to, as a society.
I believe one of our best traits as humans is our ability to adapt to new situations, and this is yet another thing that we have to learn how to adapt to.
So we learned how to adapt with the internet.
We're actually still adapting to it, because it's not like you

(27:07):
do it once and then it's done.
So there's a continued adaptation as we learn more and build more capabilities.
The internet, my opinion here, is the internet is what led to the creation of AI as it is today, because you needed lots of data.
And how do you get lots of data without something like an internet?
So the internet basically became a giant content

(27:29):
generator that, uh, people now feed into these huge AIs. Whether it was legal or not, that's a different story, but it is certain that the current AI, and the way that it's heading, a large portion of it would very likely not have been. I don't want to say it's impossible, but it would have

(27:52):
been much more difficult to do it without something like the internet.
So I think over time we'll learn how to adapt, like we have. We have to learn how to cope with things like, as a society, how do we cope with deep fakes?
How do we know that the thing we're looking at is real?
And we don't always have good answers to that yet.

(28:13):
But, uh, as a society, we will come to be more suspicious of, yeah, of things that we see on the internet.
So we'll naturally develop some ourselves.
In the same way that when I go into a store and I see a tabloid there, and I open up the tabloid and it's saying something that's ludicrous, I'm thinking, is this real? Right, right. And I think we'll do the same with the

(28:35):
internet.

Speaker 3 (28:36):
That's cool.
I asked you what your fearswere regarding AI.
I'm curious what maybe moreobjective dangers, as opposed to
just your fears, or you feellike generally as a society,
like what are some of thedangers that may be out there in
regards to AI?

Speaker 1 (28:53):
I mentioned there were two fears that I had that were on the short term, and this plays into some of it: the information.
An AI, if you think about what an AI is today, it is something that learns statistical properties, and, specifically, neural networks.
There's different kinds of AIs that are not neural networks,

(29:16):
that work in completely different ways, but again, the one that everyone's focusing on is the neural network one. And part of what they do is they learn, uh, they learn some statistical property about the data set, or you can even go far enough to say, with the larger ones, that they compress some of the information and sort of almost develop a memory, okay

(29:38):
, for lack of a better phrase. Yeah. Um, and this information, uh, on one side it's like garbage in, garbage out, right? So you need to make sure you have high quality information to train it on.
Um, and some people have already done a lot of work towards trying to find bias within them.

(29:59):
Uh, sometimes it can be very difficult to find some of these biases, because you think that you've covered it, but there might be other indicators that tie it in.
There's a more concrete thing that is also happening, though, which is that, in order to train this, again, we mentioned about needing a lot of data.
There is a danger when you gather lots of data into one

(30:20):
spot.
If it's sensitive data, and you have some person with malintent who wants to compromise that information, whether it's to modify it or steal it, and you've gathered all that information into one spot, that becomes a very juicy target.

Speaker 3 (30:38):
Yeah.

Speaker 1 (30:39):
So there's, and I mentioned before about a lot of companies being scared that they're going to fall behind. Yeah. So there's a pressure now, there's an economic pressure, for companies to not always protect those systems in the way they historically would have, because they're afraid of losing their competitive advantage.

(31:05):
So in the short to medium term, that's one of the fears that I have: that information being manipulated or tampered with, or stolen.
Stolen is the worst one.
But in the long term, if I were to name a fear for the long term, one would be us as a society not learning how to cope with AI taking over a lot

(31:30):
of roles.
Like, you look at how our economic system is set up: we encourage people to go find jobs. Suppose that AI has become really amazingly good and it can take on most jobs, for a moment, just as a thought experiment. One of the things that I think is healthy for us is for us to

(31:50):
engage in tasks that challenge us.
It could be a martial art, where you're challenging your body and your mind to work through a problem of how do I do this in a safe way and get the outcome that I want. Or, you know, other sports are similar.

(32:10):
It could be something like art: how do I create something that is unique and beautiful, or something that really speaks to myself and to others? And I think part of that adaptation is, how do we make sure that people can still be engaged in meaningful endeavors? And in a world where perhaps the AI is taking care of most of

(32:33):
the work, then that means we have to find ways, as a society, to keep ourselves busy, just from a mental health perspective. So that's good.

Speaker 3 (32:47):
So, yeah, that's really helpful.
I like your perspective, and it's good to know that somebody like you is kind of in the mix on all of this stuff.
That's great.

Speaker 1 (32:57):
I don't know what I can do about it. I try, I know, but still.

Speaker 3 (33:01):
I mean, I'm sure there's a lot more people that have similar perspectives, or at least have a conscience toward those sorts of things as well.

Speaker 1 (33:07):
I hope so, because the worst-case scenario would be, as a society, we have this happen and it's like, well, you're not doing anything meaningful, so you're not going to get enough money to survive, and you can't out-compete the AI because it's too cheap.

Speaker 2 (33:26):
Sure sure.

Speaker 1 (33:29):
And so now you have this issue of how do you even feed yourself? Plus, how do you keep yourself engaged when you can't even get the basics down?

Speaker 3 (33:39):
Yeah, that's great.

Speaker 1 (33:40):
My hope is that, again, like I said, I'm an
optimistic person.
I don't think that'll be whathappens.

Speaker 2 (33:45):
Right right right.

Speaker 1 (33:47):
But we can take steps in order to help that, and I don't think the step is necessarily to stop AI from becoming a thing.

Speaker 2 (33:54):
I don't think that's going to work.

Speaker 3 (33:55):
Yeah.

Speaker 1 (33:55):
I think our best path towards this is, again, as a society, how do we make sure that we're in a spot where, as these things start to come around, people can still provide for their families and still find things that they really enjoy, in order to keep their minds active.

Speaker 3 (34:17):
That's great. I love it.
Frederick, thanks for joining me.
I'm going to wrap it up here, but before we go, what are some of the places that you love about Fremont?
What are the things that you do here?
I know that you like Devout because I see you there often, and it's always good to run into you.
What are some of the other things that you enjoy about Fremont?

Speaker 1 (34:36):
Definitely, we are doing better on the coffee than we used to, because I remember when Devout was first opening, during the Niles Flea Market. They did like a soft launch. I walked in, and it was like the very front of the shop was it, and there was like a wood panel. That's right, and I was so excited, because it was like we

(34:58):
did not, not to say there were no good coffee shops here, but there was a lack of variety.

Speaker 3 (35:04):
Yeah, yeah.

Speaker 1 (35:05):
And I think I do like some of the things around nature that we have over here. I think it's something a lot of people don't quite realize, some of the parks we have, some of the nature preserves that are here, and they're underutilized

(35:28):
absolutely, in many scenarios.
Yeah, for sure. Also, in Niles in particular, I really enjoy the community here.
I really enjoy the way people are. Like, when I first moved over here, I ended up going to Mr Mikey's

(35:49):
and all my stuff was still in storage, though, and I needed like a screwdriver.
So I go in to buy one, it's like, maybe they'll sell a screwdriver, and so I walk in: hey, do you sell one?
Oh, no, here, just borrow mine.

Speaker 3 (36:00):
Oh, my word, that's cool.

Speaker 1 (36:02):
I knew I was in a good spot.

Speaker 3 (36:03):
Yeah, that's awesome, I love that.

Speaker 1 (36:05):
And to give a juxtaposition, when I was in Palo Alto, because I lived in downtown Palo Alto for a while, I had neighbors there, like literally lived in the place next door, that for a year and a half, two years, never said hello once.
If you say hello, they like walk faster. And I

(36:26):
think, I don't want to give them too much of a hard time, and not say that everyone is like that, there's some amazing people over there too. But I think when people are worried about their future, worried about their career, they're in a position where they feel they have to work incredibly hard in order to make it.
My guess is maybe these people didn't feel like they have space

(36:50):
for anything else other than, are you useful to me from a career perspective? If not, then it's actually a risk, because I won't do the things I need to do in order to succeed.

Speaker 3 (37:04):
Yeah, it's just a guess. Yeah, for sure.
I mean, there's a lot of different reasons why people do what they do, but I do love Niles, I love Fremont.
I've experienced a lot of friendliness as well, and I think that's one of the things that I want to highlight through the podcast, is just that there are a lot of great people here, and there are a lot of people that are truly, genuinely

(37:24):
interested in getting to know you, and they've got great stories worth getting to know, so we should take the time to do that.

Speaker 1 (37:30):
I don't think there is a moment. Every time I've gone walking out in Niles, and very often I end up talking with a lot of people, I don't think I've ever come across a time where I didn't learn something interesting. People have always been friendly over here.
It's one of the things that I really love about this,

(37:50):
about this particular place.

Speaker 3 (37:51):
That's awesome.
Well, Frederick, thank you so much. I look forward to sharing this conversation with our community, and I appreciate you taking the time to be on it.

Speaker 1 (38:00):
Well, thank you very much.

Speaker 3 (38:01):
Yeah, for sure.

Speaker 2 (38:05):
This episode was hosted and produced by Ricky B.
I'm Gary Williams. Andrew Cavette is the editor.
Scheduling and pre-interviews by Sarah S.
Be sure to subscribe wherever it is that you listen, so you don't miss an episode.
You can find everything we make, the podcast and all of our social media links, at thefremontpodcast.com.

Speaker 3 (38:31):
Join us next week on the Fremont Podcast.
This is a Muggins Media Podcast.