
September 12, 2023 28 mins

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Aaron Pritz (00:08):
Welcome back to Simply Solving Cyber.
I'm Aaron Pritz, and I'm Cody Rivers.
And today we're here with the elusive Tim Sewell, the CTO and
co-founder of Reveal Risk.
My co-founder and business partner here at the company.

Cody Rivers (00:21):
Very elusive.

Aaron Pritz (00:22):
Tim, how are you doing today?

Tim Sewell (00:23):
I am thrilled to be here finally.
Yeah, it has been a long time coming.

Aaron Pritz (00:26):
We'll get into why you were Mr.
Elusive on this specific show, nothing else, here in a bit.
But let's start, uh, how we always do, by giving the audience
a chance to get to know you a little bit more.
So, start with telling us how you got into cybersecurity and
maybe your defining moment of when you knew it was your
permanent spot.

Tim Sewell (00:47):
Yeah.
So I've been in cyber for a long time.
It's always been an area that's fascinated me all the way back
to when I was a kid. For my ninth birthday,
I think about the kind of presents that you would get,
maybe a Nintendo or some roller skates or something.
I got the orange book.
Which is the TCSEC, the Trusted Computer System Evaluation

(01:08):
Criteria.

Aaron Pritz (01:09):
So, a page-turner there.

Tim Sewell (01:10):
Oh, it's kind of known as the hacker Bible from
the late eighties, early nineties.
Very hard book to get at that point in time.

Cody Rivers (01:17):
Hardback or paper?

Tim Sewell (01:18):
It's paper.
And it is a very bright orange cover.

Cody Rivers (01:21):
Okay, excellent.

Tim Sewell (01:22):
First edition, orange book.

Aaron Pritz (01:23):
Don't want to lose it.

Tim Sewell (01:24):
There's a whole series of them, the rainbow
series of books on computer security, but the orange book is
kind of the core, foundational one.

Cody Rivers (01:30):
Interesting.
Okay.

Tim Sewell (01:33):
And, how did I know that I wanted to do cyber as a
passion, kind of, for my career, for my life?
Um, I do love this stuff, uh, but it really started out with
the video games.
So early on you could make modifications to the game and,
suddenly, your characters would no longer die from dysentery in

(01:53):
Oregon.

Aaron Pritz (01:53):
What kind of video?
Oh, Oregon Trail.
Okay.
Nice.
Old school.
Apple IIe?

Tim Sewell (01:59):
Yeah.
Yeah.
Apple IIe, early, early Packard Bell.
Good times.

Aaron Pritz (02:03):
Nice.
So then tell us about your journey through cyber.
So it started with games and the orange book and kind of being
interested in the topic.
Where did you start and how did you progress?

Cody Rivers (02:14):
And it wasn't called cyber then.

Tim Sewell (02:16):
Oh no, no.
It was still called computer security or information
assurance, which is kind of what the government term for it
was.
Uh, and they tried to extend it beyond digital information,
actually, to talk about protecting paper copies of information.
So, information assurance.

Cody Rivers (02:32):
You're doing this thing and you're liking it and
say, this is cool.
I like doing this.
And then how's it, how's the story go?

Tim Sewell (02:37):
Yeah.
So I kept pursuing that.
I went to undergrad for computer science and focused on
information assurance and got right into defense.
So I literally was working, while cyber was still underground,
in the underground at U.S. Stratcom for a while, and...

Cody Rivers (02:56):
How far underground?

Tim Sewell (02:58):
That's classified.

Aaron Pritz (03:01):
Do you take an elevator, or a ladder, or a spiral
staircase?

Cody Rivers (03:05):
Firepole would be way more fun, man.

Tim Sewell (03:08):
Firepole would have been cool, but no, it's actually
a series of ramps that go down, because they have to be able to
take, like, truckloads of food down there.
When they close the door, they have to be able to survive for
like two years.

Aaron Pritz (03:19):
Now I'm thinking of the show Silo.

Cody Rivers (03:21):
I was thinking of Interstellar, when they're going
on its side and going around.

Aaron Pritz (03:26):
Okay.
Well, what do you do underground?
What did you do underground?
How did you get out, most importantly?

Tim Sewell (03:32):
Getting out was actually kind of interesting.
Underground, I was the accreditation lead for a bunch
of systems, which means I was the guy responsible for making
sure all of the paperwork was in order, that the systems had been
properly tested, configured, installed, were adhering to all
the security best practices, and so forth.
And that process was very documentation-intensive and

(03:54):
typically took a very long time.

Aaron Pritz (03:56):
Sounds not fun.

Tim Sewell (03:57):
Well, you know, it wasn't.
Uh, so I applied some of my computer science background.
I thought, oh, I bet I can automate some of this.
So I wrote a bunch of macros, because at this point in time
you could still have macros in Word documents; that hadn't
become, you know, the criminal vehicle of choice. And I turned
what used to be a multi-month process into days to weeks,

(04:23):
which I thought was fantastic.
And my customer thought it was fantastic.
And my boss said, well, that's great.
You've just automated yourself out of a job.
But it worked out, 'cause I got to go to California.
They sent me out there to work on some interesting stuff that's
still in orbit, I think.

(04:44):
And, I got to continue to apply my talents and interests and do
really cool stuff in cybersecurity for a long time in the
aerospace, defense, and intelligence community.
They were heavily focused on cyber, and so it was a great
time to be there.
And as I was doing that, we started to see banks and other
commercial entities become more concerned about their technical

(05:06):
security, with the rise of the internet and e-commerce and
everything becoming connected.

Cody Rivers (05:10):
So at this point, was it starting to become called
cyber?
I mean, were we now in the continuum of the development of
cyber on the commercial stage?

Tim Sewell (05:17):
Yeah, it was starting.
We had a lot of debates about, was it still information
security?
Was it cybersecurity?
Were those two things different? Was information
assurance still in here?
You had folks talking about electronic warfare broadly,
which includes things like signals intelligence and radar
jamming.
But the terms have evolved over the years, and they're going to

(05:40):
continue to evolve, because we keep learning more and more.
You know, when I started out in security, you could learn all of
security.
It took four or five years of dedicated study.
You take things like compiler construction, network security,
a little bit of maybe identity management, some application
development, and you could credibly call yourself a

(06:02):
security person, because you knew all of computer security.
It's impossible to do that now.
The field has exploded.
There are so many disciplines and sub-disciplines and so many
different applications of all these different technologies
that no one person can keep up with any of it.

Cody Rivers (06:18):
So many channels, so many different mediums to get
to and fro.

Tim Sewell (06:21):
Exactly, exactly.

Aaron Pritz (06:23):
So then, post-defense, where did your career
take you after that?

Tim Sewell (06:27):
Yeah.
So I got tired of making things that blew up people.
Started thinking I should do things that help put people back
together.
So I went up to a little hospital in Minnesota called the
Mayo Clinic.
Worked there for a few years and helped them do some really
interesting things in terms of transforming the way that
healthcare approaches network security and medical device

(06:49):
security.
They do some fantastic work up there, and I was really proud and
privileged to be a part of it.

Cody Rivers (06:56):
That's excellent.

Aaron Pritz (06:57):
And then we got to work together in the pharmaceutical
industry.

Tim Sewell (07:00):
Yeah.
Yeah.
So the one challenge with Rochester, Minnesota, is it gets
quite cold.
So, you know, one day it was 52 degrees below zero.
I got a call from a recruiter that said, hey, have you heard
of Eli Lilly?
I said, yeah, that's south of here.
We should talk.
So I moved here to Indiana, uh, it was about 2016, and worked at

(07:21):
Eli Lilly for a little while before leaving in 2018 to
co-found Reveal Risk here with Aaron.

Cody Rivers (07:28):
I never asked, how'd you guys first meet at
Lilly?
Was it over like a lunch cafeteria cross, or did he have a
pudding snack that looked good?
Do you want to share that?

Aaron Pritz (07:35):
Probably in a meeting, but I think we did have
some lunches and coffees.

Tim Sewell (07:38):
We did.
I think I had 20-30 lunches with people in my first month at
Lilly. Uh, they did actually a really good job at making sure
that I got to meet many of my peers and folks that I would
need to work with, uh, in my role there as part of my
(07:59):
onboarding.

Cody Rivers (07:59):
Awesome.

Aaron Pritz (08:00):
So I guess back to the question of your defining
moment, as you went through, wasthere a point in which you knew?
Hey, you weren't going to pivotto something else or cyber was
kind of where you, I mean, youwere passionate from the
beginning, but what was thatdefining moment?

Cody Rivers (08:16):
It's a good question.

Tim Sewell (08:18):
It's almost like I've just always assumed it.
It's been such a core part of who I am and who I've always
been.
I don't know that I considered much else.
I mean, I've always liked business too, so this is, this
has been a good, good decision over here.

Cody Rivers (08:37):
Kinda nailed the two things.

Aaron Pritz (08:39):
Nice.
Well, as I alluded to when we kicked off, you have been a
little bit elusive on the podcast front, uh, for reasons
we won't get into, but let's just say important things have
always come up.
Uh, client... well, clients always come first. But every
time, we call it the Tim Curse: when we try to do a podcast with
Tim, something interesting comes up. And, uh, no excuses today,
you're here, but... gotcha.

(09:00):
We did actually prepare, Cody and I.
We've been doing a lot in AI, and we created an AI-based deepfake
Tim, which, I think... we didn't have to use it today, but I
think we should introduce him.
What do you think?

Cody Rivers (09:12):
You know what?
I think it's nice to have him say a few words.
Okay, well, let's, let's do that.

Tim (09:17):
Hi everyone! I am Tim-bot.
The A.I.
robotic clone of Tim Sewell...
and see, I even mispronounced my own last name, like many
people that real Tim meets.
That's OK.
I like many things in cyber, in life.
Networking with hundreds of people per day, and putting
myself out there playing guitar and singing in my barbershop
quartet.
Oh wait, none of that is true, and I have indeed hacked real
Tim.

Tim Sewell (09:36):
Okay.
Well, you know, a little flat, but, uh, you know, imitation is
the most sincere form of flattery.
So thanks a lot, guys.

Aaron Pritz (09:44):
The intonation's a little off, but the voice, I
definitely can tell that that's Tim.
All right.
Well, on the topic of AI, Tim, you've been doing AI before the
recent resurgence of AI through ChatGPT, or what we know now.
Now every app has an AI bolt-on.
But, give us your thoughts.

(10:04):
What's been AI to you and cyber, and, uh, maybe pre- and post- the
commercialization of it here recently?
What's that landscape?

Tim Sewell (10:13):
Yeah.
So when I was working with what I would consider early AI in
cyber, back in the mid two thousands and late two
thousands, the idea was, how can we leverage technology better to
help the computer defenders really do their job?
How can it be a force multiplier?

(10:35):
The challenge in cyber is often so much data, so few analysts,
such a niche set of expertise.
We have to have better tools to help us find the needles in the
stacks of needles that are hidden under haystacks.
So it's been a force multiplier for the cybersecurity industry

(10:58):
for a long time.
What we've seen recently is AI becoming accessible much more
broadly.
Tools like ChatGPT, the new AI art tools like Midjourney.
You've got general users making things with AI, and it's become

(11:22):
really a flashpoint for culture at this point.

Cody Rivers (11:25):
Definitely a buzzword now.

Tim Sewell (11:26):
And it is everywhere.
So of course in the computer security realm, you've got
attackers, you've got defenders. The attackers love AI: generative
AI helps write amazing targeted phish.
It helps create self-mutating malware.

(11:46):
It will help write... it'll analyze code and find flaws, and
then it will help you write exploits for those flaws.
It's a tremendous performance boost for the adversary community.
On the defender side, similarly, we can use generative
AI to write better awareness content.
We can use it to more quickly analyze large amounts of data to

(12:09):
discover the anomalies that are caused by bad behavior.
So it's a little bit of an arms race.
The attackers will figure out something.
Then the defenders will figure out a counter.
The defenders will figure out some really cool detection.
The adversaries will find an evasion.
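A minimal sketch of that defender-side idea in Python, assuming scikit-learn's IsolationForest as the anomaly detector; the login-event numbers here are invented for illustration. The model learns what "normal" looks like and surfaces the outliers, the needles in the stack of needles, for an analyst to review:

```python
# Toy defender-side anomaly hunt: surface odd logins from a pile of events.
# Features are invented for illustration: (hour of day, MB transferred).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# "Normal" behavior: business-hours logins, modest data transfer.
normal = np.column_stack([rng.normal(13, 2, 500),    # login hour ~ 9am-5pm
                          rng.normal(50, 10, 500)])  # ~50 MB moved
# A couple of bad events: 3am logins moving a lot of data.
bad = np.array([[3.0, 400.0], [2.5, 350.0]])
events = np.vstack([normal, bad])

model = IsolationForest(contamination=0.01, random_state=0).fit(events)
flags = model.predict(events)   # -1 = anomaly, 1 = looks normal
print(events[flags == -1])      # the 3am bulk transfers surface for review
```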

Cody Rivers (12:25):
Using the same tool is kind of wild.
It's like this double-edged sword.

Aaron Pritz (12:29):
Absolutely.
Is it just me, or, like until the last year or so, maybe less than
a year, AI had been overused as a marketing jargon term, like
everything has AI and really nothing reflected that it had
AI.
So, again, I was kind of writing it off, but you were doing
stuff, maybe AI before it was called AI.

(12:51):
Talk to us about what you really consider AI, and then what's the
shell hype that a marketer might put on something to create more
intrigue about their product.

Tim Sewell (13:01):
Yeah, so maybe it's easier to answer the second
half of that question: what is kind of called AI.
Anything that the computer does for you, somebody is going to
call it AI.
It's "Oh, the computer figured out this pattern for me." Well,
okay, you put some rules and some conditional if statements,
if-then-else, some logic stuff, and it came out with an answer.

(13:25):
And anytime you put that input in, you're going to get that
answer out.
It's not really AI.

Cody Rivers (13:32):
Thinking? It's just executing a set of steps.

Tim Sewell (13:35):
To me it becomes AI when that output becomes less
deterministic.
So Chat GPT is a great exampleof this.
You go to Chat GPT and you giveit a prompt and you ask it a
question.
It'll come back with an answer.
And even the bottom of thatanswer is going to hit
regenerate and you give it theexact same prompt, it'll come
back with a completely differentanswer.

Aaron Pritz (13:55):
There's not a defined set of multiple-choice
answers, right?

Tim Sewell (13:59):
Exactly.
There's not a defined output based on a specific input.
And that's what really makes it kind of feel magical, right?
It feels like the computer is doing some thinking for you.
It's still doing cluster analysis and grouping, and it's
putting these words together because it sees these words
together a lot in its corpus of training data.

(14:22):
So it's still a computer, it's not really thinking, but it
feels a lot more like it is now, because you're getting these
non-deterministic outputs from your input.
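A minimal sketch in Python of the distinction Tim is drawing, with a toy vocabulary and made-up scores rather than any real model: a rule-based function maps the same input to the same output every time, while a sampler draws from a softmax distribution over candidate next words, so the same prompt can come back different on each run. Pushing the temperature toward zero collapses it back to near-deterministic behavior:

```python
import math
import random

# Rule-based "AI": pure if/then logic. Same input always gives the same output.
def rule_based(prompt: str) -> str:
    if "weather" in prompt:
        return "Check a forecast site."
    return "I don't know."

# Toy generative model: scores for possible next words after a prompt.
# These logits are invented for illustration, not from any real model.
NEXT_WORD_LOGITS = {"cyber": 2.0, "security": 1.5, "risk": 1.0, "podcasts": 0.5}

def sample_next_word(temperature: float = 1.0) -> str:
    words = list(NEXT_WORD_LOGITS)
    # Softmax with temperature: low temperature sharpens the distribution
    # toward the top-scoring word (near-deterministic); higher temperature
    # flattens it, so repeated runs diverge.
    scaled = [NEXT_WORD_LOGITS[w] / temperature for w in words]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(words, weights=probs, k=1)[0]

if __name__ == "__main__":
    print(rule_based("What's the weather?"))           # always the same answer
    print([sample_next_word(1.5) for _ in range(5)])   # varies run to run
    print([sample_next_word(0.01) for _ in range(5)])  # nearly always "cyber"
```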

Aaron Pritz (14:32):
Do you think AI will ever be, in the near
term, sentient? I know there's been people claiming that it
already is.
What are your thoughts there,

Cody Rivers (14:41):
Jarvis?

Tim Sewell (14:42):
I think it's an interesting philosophical point
and one that we're going to have to struggle with as a society
for a while, because I don't think we have a really good
definition of sentience.
I think this is going to force us to create one, and then we'll
have to decide if our computer AI reaches that
threshold or not.

Aaron Pritz (15:00):
Makes sense.
Well, we've had a couple discussions on AI.
It's really hot right now.
It's hard to avoid.
But maybe let's pivot for this discussion a little bit more
into deepfakes.
And Tim, you and I were out at DEFCON and Black Hat and,
specifically at DEFCON, we saw a talk on that with a live
demonstration.
Do you want to talk about that, and then we can unpack kind of

(15:21):
where that leads us to think that the deepfake society or
space will go from here?

Tim Sewell (15:30):
Yeah, so the presentation that we went to was
a live demo, and basically pulled together a few different
open-source tools to create a model of an individual,
including their visual likeness and their voice, using

(15:53):
surprisingly small amounts of source content.

Aaron Pritz (15:56):
Yep.
So it was the guy's real CEO, right?
And that was one of the ones he mocked up.

Tim Sewell (16:01):
Yes.
Yes.
One of, one of them was his real CEO, and incredibly convincing.

Aaron Pritz (16:07):
Didn't demo with that one, probably for good
career reasons.

Cody Rivers (16:10):
Man.
So they're taking AI and creating real-like images and
people and video.

Tim Sewell (16:15):
Yeah.
And the cool part, or maybe the scary part, for this one is
during the live session, he's standing up there on stage, and
you can see it's him, and you see the camera on him, and he starts
to run his script, and you see the camera image transform into,
uh, Jeff Moss, who is the founder of Black Hat and DEFCON

(16:38):
and a very well-known figure in that community, but...

Cody Rivers (16:41):
That's wild.

Aaron Pritz (16:42):
Not only looked like, but he got the audio
replica in real time.

Cody Rivers (16:46):
So how do you, man, that's wild.
So then how, what are some things out there to detect?
What things can AI not do yet?

Tim Sewell (16:54):
Yeah.
So as impressive as the real-time capability is there, there
are still some limitations.
We'll see how long these limitations last, but:
side-profile views.
So if somebody turns to the side on camera, oftentimes that will
confuse the model.
It will blur, or it will have some sort of distortion or

(17:15):
glitch.
Similarly, nuanced facial expressions can be a challenge.
Somebody will be grinning and their cheeks will still drop,
because the model is not well articulated that way.
As far as vocal?
You can find some odd intonation for things like laughter, things

(17:36):
that are not necessarily spoken words.
You might not get a lot of good samples to create a vocal model
out of.

Cody Rivers (17:43):
What about, like, interacting? So if it's like on
a phone call, or, or maybe it's even, to your point, via prompt,
which is like you see on LinkedIn nowadays with a lot of
that stuff there, what are some, like, interaction flags?

Tim Sewell (17:54):
Yeah.
So how do you know you're dealing with something that's
generated by an AI?
AIs don't get jokes, so they're not very good at humor.
Okay.
And they sometimes drop a lot of context.
Again, they're finding groupings of words or groupings of
concepts or ideas that they see frequently in their training

(18:14):
data.
Yep.
And so if you're asking questions in a bit of a
roundabout way, it'll get some weird clustering, some weird...
Weird responses.

Cody Rivers (18:22):
No, no humor.
A little bland.
He sounds like I dated this person in college.

Aaron Pritz (18:28):
Didn't we all?
Yeah.
My mind goes to social engineering, especially with the
real-time.
'Cause Cody, you were asking about fully generated, like, a
person that doesn't exist saying things.
If I've got a camera trained on me, using some of the same
technology that was demoed and released open source to the
broader community, I could call you on Teams or Zoom, emulate

(18:50):
whoever I trained the model to emulate, and have a conversation
posing.
The other thing I was thinking, on the glitching, and this just
makes it harder.
You think about Teams and Zoom.
The background blur, or the art backgrounds.
When you turn to the side, you get glitching normally, so I
think that's almost a mask that makes it even harder to detect

(19:10):
on these video call platforms.
So, Tim, thoughts on social engineering?
Do you think that the attackers are already jumping in with
this?
Do you think it's early?
Where do we think we are with this?

Tim Sewell (19:22):
Oh yeah.
I think we've already seen examples of these techniques
being used to create big political debates and arguments.
You'll see politicians making statements that they didn't
actually make, but it's almost impossible to tell.

Cody Rivers (19:40):
Yeah.

Tim Sewell (19:41):
And we've also seen cases where executives have been
impersonated via video chat channels, telling
their staff to do things that they wouldn't ordinarily
do. But hey, it's my boss on video telling me to do this;
I guess I'm going to go buy those gift cards after all.

Cody Rivers (20:01):
Yeah.
He's on video telling me; this is pretty real.

Aaron Pritz (20:04):
Yeah, gift cards are never a real request, until
they are, right?

Tim Sewell (20:08):
I was gonna say, didn't you ask me to buy some
gift cards?

Aaron Pritz (20:10):
I think for a conference or something.
So touché, touché.
On the election front, or really any big public debate,
then we get into influence campaigns and trying to sway
perceptions.
We've obviously already seen cases of that, or negative ads that show
things that were faked or whatnot.
Are we going to see more of this?
Is this an unfortunate status quo?

(20:32):
Or do you think there's going to be some regulation to try to
prevent that, so people have decision-making
capability based upon reality?

Tim Sewell (20:40):
I think we're going to see an almost unending stream
of it, and figuring out what to trust is now a very hard... it's a
very hard problem now, because it's so easy to create
compelling, realistic fake content.
And there are a lot of social institutions that are

(21:02):
susceptible to that kind of an attack or that kind of a threat.

Cody Rivers (21:06):
Yeah.

Tim Sewell (21:07):
Elections, obviously, one that is terrifying, really.
And you think about how social media can influence an election
or a public policy conversation.
Then you apply the ability to fake what your opponent says or

(21:29):
does, in a compelling way, in a society that's already
challenged with fact-checking.

Aaron Pritz (21:37):
You mean fake like this?

Tim (21:39):
Hi!!!! I'm back everyone! I just want to put in a plug for
the 2024 presidential campaign of Statler and Waldorf.
I love those Muppets.
They have great coaching and feedback and tell the truth
without backing down.

Tim Sewell (21:50):
That was a little too good.

Aaron Pritz (21:53):
So I guess maybe turning to the future.
What recommendations around AI do you have for cyber leaders,
and then maybe company owners and executives? Two-part
question, because the remit or the angle is different.
But we've got listeners that are cyber professionals, and those
that might be small-business owners or executives of companies.

(22:14):
Like, what?
What do people need to be focused on now, to get ahead of
this or to deal with it?

Tim Sewell (22:19):
Yeah, there are a lot of policy and legal
questions around AI and intellectual property. Those are
going to take years to work through various legal channels
to get to resolution.
And I am not a lawyer, so I don't feel too qualified to

(22:40):
speak on them, other than to say that they exist, and they are
real, and they will cause challenges for all
organizations: either the organizations trying to use AI,
or trying to prevent the use of AI for something or the other.
I think from a technical perspective, to use AI safely,
you've got to think about ways that AI can be attacked in and

(23:01):
of itself, and how you defend the AI in and of itself.

Cody Rivers (23:05):
Yeah.

Tim Sewell (23:06):
So poisoned training data is a real threat.
If you can give the model bad data, it's going to learn bad
data.
Similarly, if you can control the inputs to the model, or
change the inputs to the model, it will generate bad outputs.

(23:28):
So if you can control how somebody is interacting with
their AI, and change words or change prompts or change things,
you can force the AI to do things it's not supposed to do.

Cody Rivers (23:38):
Kind of because you're feeding the library of
options.
The salt, or the thing you're putting in there that's not
right, will come out, because it doesn't know.

Tim Sewell (23:47):
Exactly.
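A minimal sketch of the poisoning idea, as a toy nearest-centroid spam filter in Python; the two-feature data is invented for illustration. Mislabeling a few spammy training examples as "ham" drags the ham centroid toward spam territory, and a message the clean model caught now slips through: bad data in, bad behavior out.

```python
# Toy nearest-centroid "spam filter" to illustrate training-data poisoning.
# Features (invented for illustration): [link_count, urgency_words].

def centroid(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(2)]

def classify(x, spam_c, ham_c):
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return "spam" if dist(x, spam_c) < dist(x, ham_c) else "ham"

clean = [([5, 4], "spam"), ([6, 5], "spam"), ([4, 6], "spam"),
         ([0, 1], "ham"), ([1, 0], "ham"), ([1, 1], "ham")]

# Attacker poisons the training set: spammy examples mislabeled as "ham".
poisoned = clean + [([5, 5], "ham"), ([6, 4], "ham"), ([4, 5], "ham")]

suspicious = [3, 3]  # a moderately spammy message
for name, data in (("clean", clean), ("poisoned", poisoned)):
    spam_c = centroid([x for x, y in data if y == "spam"])
    ham_c = centroid([x for x, y in data if y == "ham"])
    print(name, "->", classify(suspicious, spam_c, ham_c))
# clean -> spam; poisoned -> ham (the poisoned filter misses it)
```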

Cody Rivers (23:48):
Gotcha.
So, man, is there hope for the future?
I hear a lot of things AI can do that are nefarious, but I
think there's a lot of great things. But talk about maybe
the hope for the future.

Tim Sewell (23:59):
Yeah.
So we're in the infancy of general computing AI.
Similar to the launch of the internet or networking of
computers.
Very early on, there was Microsoft Mecca.
A lot of concern about that too.
What is that going to mean?
How are we going to handle that?
And I think this is a similar period of societal inflection as

(24:21):
we adopt more AI solutions.
It's hard to tell.
I like to look to science fiction, because I think it does
actually a pretty good job at giving you a vision of at least
potential futures.
So I like to be a bit of an optimist and look to a Star
Trek-like future, where AI removes a lot of the drudgery

(24:45):
from human existence and frees up people to pursue arts and-

Cody Rivers (24:52):
The nuance versus the kind of mundane?

Tim Sewell (24:55):
Pursue what I would call the truly human endeavors,
get rid of all the drudgery.
Of course, science fiction will also tell you that the AI, as
soon as it becomes sentient, is just going to launch all the
nukes and kill us all.
So that would be "The Terminator" version.

Cody Rivers (25:10):
There we go.

Aaron Pritz (25:11):
Where does"Minority Report" fit in?
It's kind of dystopian.

Cody Rivers (25:14):
That's the precogs.
That's actually-

Tim Sewell (25:16):
That's in the future, which is, AI can
ostensibly do that, right?
I mean, there are people saying, "Hey, if I feed all the
financial data into an AI, it should be able to predict the
stock market, right?"

Cody Rivers (25:26):
That was the precogs, man.
That wasn't, that was the little human people thing.
So, speaking of AI, what is probably your favorite science
fiction AI?
I mean, I think of Knight Rider, you got Terminator, you got
Jarvis from Marvel, Marvel Comics, the "Iron Man."

Tim Sewell (25:42):
I'd probably have to go with Data.

Cody Rivers (25:44):
Yeah, that's a good, that's a good one.

Tim Sewell (25:45):
Data the android from"Star Trek
Generation." Probably myfavorite.
I think about some other onesthat I really enjoyed that were
maybe a little more nefarious.
There's a book by Daniel Suarezcalled"Damon" that has a really
interesting AI component.
I do find a lot of the sciencefiction books about AI really

(26:08):
are trying to be warnings about how AI could be abused.
And then I find organizations or companies or venture
capitalists, they grab those same books and they say, hey, we
should go build this.
Yeah.
It's like, no, did you read the book that said that was a bad
idea?
We shouldn't do that.
Uh, "I, Robot" is a good example of that.
You know, you have the three laws of robotics and how that

(26:29):
can ultimately lead to, um-

Cody Rivers (26:33):
Now you're asking the right questions.
There you go.

Tim Sewell (26:36):
Exactly.

Aaron Pritz (26:36):
Cody, I'm going to use the question you asked
Shelly, but what's a fun fact or something that no one knows
about you?
Very few people know about you?
Fun fact, hobby, or Tim-ism that, that you can unveil here
today.
We missed this one in prep.
So apologies in advance.
Kristen, here's the edit part.

Cody Rivers (27:02):
Did you meet any cool celebrity on a wild
happenstance?

Tim Sewell (27:06):
You know, I did meet Jane Goodall once. We were on a
flight together from Philadelphia to San Francisco.
We just happened to get seated next to each other, and I look
over and go, are you Jane Goodall?
Yes, yes, I am.
We had a lovely conversation.
She's a charming woman.

Cody Rivers (27:23):
That was from Philadelphia to San... that's a long
flight.

Tim Sewell (27:25):
That was a good long flight.

Cody Rivers (27:26):
Any good conversations you can share?

Tim Sewell (27:28):
We talked a lot about the sandhill crane
migrations in western Nebraska.
I'm from Nebraska, and she goes there every year to watch the
migrations.
They really are quite spectacular.

Cody Rivers (27:37):
What a time, man, on this little flight.

Aaron Pritz (27:40):
Any closing thoughts, Tim, you want to leave
with our listeners?
What recommendations do you have for them,
kind of protecting their company, given you've been doing that for
over 20 years?
What's your number one recommendation for focus?

Tim Sewell (27:54):
I think it's still pretty consistent.
If it sounds too good to be true, it probably is, especially
when you're looking at an AI solution right now; there's a
tremendous amount of hype and there's a lot of unknown.
So I would say continue to be vigilant.
Continue to do your due diligence, and don't believe

(28:14):
everything you see.

Aaron Pritz (28:15):
Good words.
Thanks, Tim, for joining the show.
Have a good rest of the day and weekend.

Cody Rivers (28:19):
Yeah, thank you.
Glad we finally got you here.
Man, you know, it's probably the hardest 15 feet to get you, but
we got you for episode one, so we appreciate this.

Tim Sewell (28:27):
All right.
It's been fun, guys.
We'll do it again.

Aaron Pritz (28:29):
See ya.

Cody Rivers (28:29):
Bye.