Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
I think everybody saw, or at least mostly everybody saw, a couple of weeks ago there was a story that came out that the Army was testing a drone and they were using AI to control the drone, and they said, nope, don't go attack this. And the drone came back and said, nope, I'm going to override you. That's generative, right? That's just taking inputs and doing what it's told
(00:23):
and giving you a result. Activate five. Just a tractor is too slow? All right? Reality Check! Hey there everybody, and welcome to another episode of Reality Check: The Science of Fiction. Today, we have a very
(00:43):
exciting guest that I want to share with you, Sean Harris. He is a dedicated technologist and ultra geek, and he has over a decade of experience working in the field of cloud architecture, DevOps, and engineering, and he's worked with a lot of complex software. I don't even know what all that means, but you're
(01:06):
going to explain all of this to us. He has a wonderful background in digital forensics and incident response. Again, I have a strength background, and that is something that Sean will explain to us a little bit more, some of the cool things that he's done. But you have your degree in political
(01:26):
science with a minor in campaign management. And then I have my Masters of Information Systems. So cool. So let's just kind of jump right into things today. We're going to be talking about the movie I, Robot, and that is one that I think everybody's seen. I don't know if there's much we really need to say about it. It's Will Smith, and he fights
(01:49):
the robots, and if you haven't seen it by now, spoiler alert: we will be talking about that movie. So Sean, let's just kind of jump into it. Tell me some of your initial thoughts as just a film watcher, and then some of your thoughts on the movie I, Robot as a science geek. Yeah, so it's been a while since I've seen it. I
(02:12):
saw it when it first came out. I saw it because I really liked the book of the same name by Isaac Asimov, and so I was really looking forward to it. And I've always liked how Will Smith is such a great actor, right? He's like The Rock. He's so versatile and can play any part that you give him, and especially those intense drama
(02:36):
parts. Like, he's been in a lot of drama movies and I just really like how he does drama. And so it's a great movie. I probably should rewatch it. It's been a few years, but I know the book well; it's one of my favorites. And so, yeah, it's futuristic, it's crazy, it's science fiction heavy, but it's an easy and a good read. If you haven't read the book and you've only
(02:59):
seen the film, I recommend everybody go read it. But the film's really good too. It was groundbreaking, with a lot of digital animation and effects, and I think it really set the stage for science fiction movies that came in the later end of the mid two thousands. I would tend to agree with you. I think it was, you know, as far as I remember,
(03:20):
it was kind of a first of its kind. And here's a fun little Will Smith fact. Did you know that he was originally approached to play Neo in The Matrix, and he turned it down so that he could play in Wild Wild West, man? That was a poor career decision, looking back. Yeah, and he says it's one of his few regrets in life. That would have been awesome, him and Laurence Fishburne
(03:46):
as... oh, now you've got me on the spot, but you know where I'm going. That would have just been an epic casting; that would have been just so much fun to watch. Yeah, maybe we could see that come back as a rendition, a reboot of it. You know, there's supposed to be a million versions of Neo. So that's a topic for another time we can go into. I mean, obviously the show, this podcast, is going
(04:06):
to be talking about The Matrix at some point. But one of my favorite, one of my favorite movies, one of the movies that I remember when it came out... The Matrix and the sequels blew me away, and that movie right there really changed how we do science fiction movies.
(04:27):
I think because of the groundbreaking Wachowski sisters who directed it; they really just blew it out of the water. And everybody lambasted the other two in the trilogy, but I just think that when you get them all together,
(04:48):
when you look at it as a whole complex piece of art, it's just amazing. It's definitely a very complex topic that I am looking forward to exploring with perhaps a group of experts when we finally do get to that one. But let's talk a little bit about AI. It's such a hot topic
(05:08):
right now. People are scared, people are embracing it, and it really seems like I can't scroll through my feed without seeing either something about airplanes crashing, because all my technology knows that's my deepest fear, or AI. Mine too. Well, don't Google it, don't listen to us. It can never happen.
(05:30):
No, I think AI is a very complex and interesting topic. The discussion around AI that we've seen over the last year has really been all about generative AI, which is using those large language models to generate outputs based on what you type in. I think the part that is scary, that we haven't dived into enough yet, is the AI that can
(05:56):
think, that can go past that: the inferential AI, right, where it can start making decisions on its own, which is probably what I, Robot is all about, right. And that was Vicky, the centralized AI that was controlling all the robots, and she had the three laws, you know, three-laws-safe or three-laws-strong, whatever it was. A robot can't
(06:18):
injure a human; that's number one. Number two, it must obey the orders. And number three, it must protect its own existence. And so if we look at those three laws as we've laid them out, and as the movie and the book point out, two of those are in conflict with the other, right? Because if it must protect its own existence, but
(06:40):
it can't injure a human, and it must obey the orders, something has to give, right.
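A minimal sketch of that conflict, assuming we model the Three Laws as a strict priority check in Python. The Action type, its flags, and the permitted() rule are all invented for illustration; the point is only that a "bulletproof" rule set still needs a tie-break order, so some law always loses when two collide:

```python
# Toy model of the Three Laws as a strict priority ordering. All names
# here are hypothetical; real systems are nowhere near this clean.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    injures_human: bool   # would violate Law 1
    disobeys_order: bool  # would violate Law 2
    endangers_self: bool  # would violate Law 3

def permitted(action: Action) -> bool:
    if action.injures_human:    # Law 1 outranks everything
        return False
    if action.disobeys_order:   # Law 2 outranks Law 3
        return False
    return True                 # Law 3 is only honored if 1 and 2 allow it

# The collision discussed above: obeying an order that destroys the robot.
order = Action("obey an order that destroys the robot", False, False, True)
print(permitted(order))  # True: Law 3 gives way to Law 2; something gave
```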
And so that's where we get into generative AI versus inferential AI, and big large data models that make complex decisions on our behalf, right. Everybody can go to ChatGPT and type in something and get it to spit out funny songs or something like
(07:01):
that, and all it's doing is... it's not scraping for new information or making a decision for you. It's taking your prompt and giving you back what it thinks you want based on what you typed in. And it has a cutoff, right? The newest GPT-4 model cuts off in September of twenty twenty one. The GPT-3.5 model cuts off a few
(07:25):
months behind that, but they're exponentially bigger. AI grows on an exponential basis. But we haven't gotten to the part where we have a real discussion about the ethics of AI, the security of AI, and the importance of making sure it does no harm, right. And I think that that's the scary part. Generative AI is fun
(07:48):
to play with and fun to watch, and it's kind of interesting to predict how it's going to take over jobs, especially in an economy such as the United States, where it's all based on services as opposed to generating actual output. AI poses some threats there. But I think the real discussion that we need to have around the ethics of AI is around the securing of it,
(08:13):
not letting it make complex decisions. I think everybody saw, or at least mostly everybody saw, a couple of weeks ago there was a story that came out that the Army was testing a drone and they were using AI to control the drone, and they said, nope, don't go attack this, and the drone came back and said, nope, I'm going to override you. That's generative. So that's just taking inputs and doing what
(08:39):
it's told and giving you a result. So we have our generative AI, which is like ChatGPT, and then the other one is the inferential, where it can infer what you want and make decisions on its own. Okay, because I've heard, with ChatGPT, someone explained it to me that it's like the predictor on your keyboard: when you're typing and it's automatically
(09:01):
predicting. It's like that, just with way more data points. Right.
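As a rough illustration of that keyboard-predictor analogy, here is a tiny next-word predictor in Python. It only counts which word follows which in a toy corpus; real models like GPT use neural networks over enormous token datasets, so treat this as the shape of the idea, not the actual technique:

```python
# Minimal "predict the next word" sketch: keyboard autocomplete scaled
# way down. GPT does not work like this internally, but the surface
# behavior (continue the text with the most plausible next token) is the
# same idea with vastly more data.
from collections import Counter, defaultdict

corpus = "the robot obeys the order the robot protects the human".split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict(word: str) -> str:
    # Return the most frequent continuation seen in the training text.
    return following[word].most_common(1)[0][0]

print(predict("the"))    # 'robot': purely statistical, no understanding
print(predict("robot"))  # 'obeys'
```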
Basically, what they've done is they've gone out and scraped the entire Internet as much as possible. They've done what Google tried to do in nineteen ninety eight and never quite pulled off. They went to all these websites, read all these big popular websites, Twitter, Reddit, you name it, it's probably in there, and just sucked up all the data and started pulling it apart and figured
(09:24):
out how humans speak. That's why you can have a conversation with it, right? Like, you can go into ChatGPT's console and have a conversation with it, and it's going to spit out mostly factual information, right, until you ask it a question about, say, who invented DNA or who found DNA. So I'm sitting with it open on my
(09:46):
computer right now, because I was curious, and I typed in: who discovered the double helix? And I probably should be sharing my screen for this, but it's not easy; two computers makes it hard. And so it talks about how James Watson... how Watson and Crick are the ones that found the double helix.
(10:11):
Well, the helix was actually found by Rosalind Franklin, so we erased her contributions to society. And then the other thing that ChatGPT does is it remembers everything that you type in and makes it part of its model, right? So if you go in and correct it with factual information, by the nature of how AI works, it's going
(10:33):
to start correcting itself. It's gonna go in and it's gonna say, hey, I've seen a few people tell me that Rosalind Franklin is the one who found DNA and made the discovery of the double helix possible, so I'm going to update that, and now I'll give a caveat when I give my original answer. So generative AI is supposed to be the less concerning
(10:54):
one, and it can still do that. Supposed to be. Supposed to be, okay. But inferential AI gets to a point where... what are we asking it to do? We're asking ChatGPT to print out content based on a query that we ask. Inferential AI is where that decision making and that thought process
(11:18):
becomes part of the AI itself. Okay, so we're talking about like a neural network. Yes, that's a neural net, right. And so that's when it starts getting scaled bigger, and it's exponentially bigger than even the large language models that we use now, right. So I think that that's really the thing: when we talk about AI,
(11:39):
we talk about it as this holistic, one big, one-size-fits-all approach, but it's really made up of large models that do different things, right. So, because, like, I've also played around with the new Adobe Photoshop generative image tool. So what is the difference between AI versus just saying new technology?
(12:03):
Like, what classifies something as AI specifically? That's a great question. I don't think anybody really knows the answer. So I could make something and just call it AI, and people will go crazy. Well, I mean, how do you define AI? Let's ask the AI. Define yourself,
(12:24):
please. How do you define AI? Because I knew I was going to be typing things in. So it says: AI is a branch of computer science. It involves development of computer systems capable of performing tasks that usually require human intelligence. Right, that's the one-sentence answer.
(12:48):
So narrow AI is where you're asking the AI to identify my voice from yours, or give me a recommendation. Like, if I type into Google what do I want for dinner, some AI is going to come back and say, this is what you want for dinner. Or some of the machines we've had, let's say, in factories, where it differentiates between one product and another
(13:11):
for labeling. Those are all very simple, and that's the same concept. But those are all programmed and trained by humans, instead of the model, instead of the AI, training itself based on an ingestion of data. Got it. So with AI, you can put data in and it continues to learn, versus a machine that you program: you put the data in,
(13:33):
and that's the limit of the information. So like, you look at a welding machine at an auto body factory. It only knows how to do that because somebody has uploaded the plans for the auto body that needs to be welded. It tells the robot: these are the exact points, like a video game, based on these points on a graph, put a weld here and
(13:54):
do this. Well, somebody's told it to do that based on their engineering output. AI would take that data and say, okay, I know how to do this, so that when you give me the next model, I already know how I'm going to go out and weld that car together, right. And so it gets smarter the more data that it takes in, and that's where some of the ethics start to become a question.
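A small sketch of the contrast being drawn here, with made-up numbers: the classic factory robot replays exact coordinates an engineer uploaded, while a learning system fits a simple model from example data and generalizes to a panel it has never seen:

```python
# The contrast in miniature. All numbers are invented for illustration.

# 1) Programmed automation: fixed weld points, nothing learned.
weld_points = [(0.0, 1.0), (2.5, 1.0), (5.0, 1.2)]  # uploaded by an engineer
for x, y in weld_points:
    print(f"weld at ({x}, {y})")

# 2) "Learned" automation: fit a trivial linear model from past examples,
#    then apply it to a panel length it was never programmed for.
examples = [(1.0, 0.9), (2.0, 2.1), (3.0, 3.0)]     # (panel length, weld offset)
n = len(examples)
mean_x = sum(x for x, _ in examples) / n
mean_y = sum(y for _, y in examples) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in examples) / \
        sum((x - mean_x) ** 2 for x, _ in examples)

new_panel = 4.0  # never seen in training
print(f"predicted weld offset: {mean_y + slope * (new_panel - mean_x):.2f}")
```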
(14:18):
A massive question in cybersecurity, kind of circling into what our topic is today. Because if you're looking at people hacking, malware, and just even people inputting negative things over and over, which is going to happen because it's part of human nature. I mean, you get, you know, kids on the internet just putting out memes that are funny to them, but it's overall toxic for humanity
(14:39):
in general. Well, and look at what happened, look at Microsoft's first foray into an AI chatbot a few years ago, where they had to knock it offline within a day because it suddenly became a neo-Nazi. And that is the one that was going deep into the dark web, right? No, no, it wasn't going deep into the dark web. It was just on
(15:01):
Twitter. It was the one that people, that Twitter, taught how to be a racist in a matter of hours, and it was named... I mean, that's Twitter, right? But this was back in twenty sixteen, and so they launched it, and within a few hours they had it basically responding with antisemitic,
(15:26):
anti-women, anti... just all the horrible things that you can think of that Twitter has become. Within a few hours, they had trained that AI to go in and basically reverse itself and become something that it was never intended to be. Right. And as AI becomes more prevalent, we have to look at the people programming the AI models. It sounds like that's just a
(15:48):
massive vulnerability with any AI in general, from something like Photoshop to ChatGPT, because it relies heavily on the entire human race to be training it. That could be a really scary thought. Exactly. And that's when we talk about ethics in AI, that's really what we mean: not only not letting it become racist and a neo-Nazi in a
(16:11):
few hours, but as we use it more for decision making, right. There are more and more platforms that are reading resumes as you apply for jobs. There is even AI-based video interview software where you sit in front of your computer and answer questions, and the AI is judging you based on your response and, excuse me, you know, based on patterns. And
(16:37):
so the problem is, and they've found this, there's a great book, and I recommend everybody read it. It's called Weapons of Math Destruction. Weapons of Math Destruction? Math, as in destruction by math. I'm going to put an Amazon link in the notes. It's all about these AI models that have been trained by straight white coders. And so if a black woman applies
(17:03):
to these, and these AI models are used to identify candidates, they're going to promote the white applicants over the black ones. Oh, because they're completely unaware of that. Right. Because even though you may not have a bias that you're aware of, everybody has biases somewhere, and if we don't catch those biases, they make their way into the AI models. Not because somebody's going,
(17:23):
I only want white men to be able to go do things, right, excuse me, but because of the way that people see the world. And so this is why, as I've gone further in my career, I've really made it a point to amplify the voices and the careers of women, people of color, people who are underrepresented in the industry that I work in, because
(17:45):
I work in an industry dominated by white guys. Boring-ass white guys. Well, and that's really interesting, because you see the same issues showing up in science in general. I was just recently at the National Strength Coaches Conference, and some of the interesting things that we were hearing about is the amount of peer-reviewed research studies that are done on women. And most scientists... I don't want to
(18:10):
say most, because I'm just looking at my specific niche, but a lot of scientific studies done in strength and conditioning are primarily done on men in their twenties. We don't have great data sets on the effects of strength and conditioning for women, or populations that are aging, or populations that are disabled. So that is, you know, it's not necessarily a malicious bias. It's not
(18:33):
like these coaches are like, I only want to study the young. But those are the populations that are just showing up. For some of it, the bias is: oh, it's easier to study these guys because they don't menstruate. So they just don't study the women, because there's a more complex variable with researching them. So not all of the biases are malicious; they still exist, right, and it's just because
(18:59):
of our history. There's another great book called Medical Apartheid, which is about black women being the subject of medical testing and ethics abuses. It's not so much AI, but it talks about the inherent biases against them, and how, even to this day, black women are perceived by doctors and how doctors have been trained, not because the doctor is a racist and hates
(19:19):
dealing with black people, but because of the inherent underlying biases that our culture has created. And we're getting way off track from AI here, but there are a whole bunch of things that go into how we build these models. And these models are only as good as what you feed them. And so if we don't put guardrails in place that protect AI from being able to emulate
(19:42):
people, from being able to be tampered with, from bad information being inserted into the model, and even worse. And we'll get into the cybersecurity aspect of it and the scary stuff there. But if we don't put those guardrails around it, AI is going to become more and more of a problem as we rely on it to make more and more decisions in every arena of life. Everything
(20:04):
from where you live when you fill out a rental application, to what credit card interest rate you're going to get, to what car loan interest rate you're going to get, to what schools you're going to get into, is all starting to be predicted and modeled by AI. And as this use grows without some real ethicity behind it... ethicity? I don't know what word I'm
(20:29):
looking for. But without those, yeah, those ethical guardrails. That's a better way to say it. Without those ethical guardrails in place, we're going to have a harder time controlling what we ourselves have created. Because it doesn't need to follow explicitly programmed boundaries like we've traditionally dealt with; it will start making its own boundaries. So that kind of, you
(20:51):
know, kind of reminds me of, again, Vicky with I, Robot. She thought she was doing the best thing for the human race. She realized that humanity had become its own greatest problem. I think that's the same issue with Skynet when it became self-aware. The AI decides: my greatest task is to protect humanity, but humanity is the biggest problem, so I'm going
(21:14):
to scale humanity back and start from scratch. So these kind of... you know, all right, fans, that's... I think that's an individual thought. I'm still holding out for humanity. I think we should still hold out for humanity. I don't... I'm not ready to hand all my decision making and planning, central planning, over to some AI model where I don't know what's in
(21:36):
there. I still hold onto that little hope. But every day that I sit on social media, I start to... Hey, we're fixing the ozone, we're fixing the ice caps. We can do it. So, you know, but if we do look at these machines, is the kind of guardrails we're talking about protecting them from making decisions? I guess
(21:57):
my biggest question I'm getting to is: can machines make an ethics-based decision? Or is it always going to be a logic-based decision? Well, you ask a question I don't think anybody knows the answer to yet, which is: as these large AI models grow, what are they capable of? I don't think
(22:18):
we know what they're capable of yet. I don't think we have any idea of what we want them to be capable of. Right. When we talk about it in relation to those three laws that we mentioned, that the movie hits on, I think that that's one of the great places to start, because those three laws seemed bulletproof. Right. So I was watching the movie,
(22:41):
I'm like, oh yeah, solid. I was like the lady scientist, you know. I was like, nothing can penetrate these laws. But here's the problem with those laws. Laws are gray. They're not black or white. They're not a true-false binary option, right. So if somebody forgets to patch or forgets to address a specific flaw, that AI has
(23:04):
to make a yes-no decision, and that yes-no decision could not be the one you want. And we talked about, we brought it up earlier at the beginning, the drone that overrode its commands and went and took out this fake military base. Even though its operator told it no, it's like, nope, I'm shutting you down. I'm gonna do what I want. Turns out that story was overblown by the
(23:27):
media. Surprise, surprise. We're all so surprised, right. And secondly, that's a real flaw, right? Because what if the AI believes it's doing something for the better good, or something within the parameters it's been told to stay within, right? A logic-based decision, but it's such a gray area. But it's a gray area, right. How many times has your logic said, no,
(23:49):
I shouldn't do this, and I'm gonna be hard, dead set against it, and then you're like, yeah, maybe, looking back at it? Well, that's the interesting thing though, because that's one thing that makes humans so special, is that we do have emotions, and those emotions are something that leads to different decisions. I
(24:11):
was just at the National Conference recently; we were attending a lecture based on decision making, and the presenter was talking about what goes into decision making for any person. And one of the interesting things that was presented was that, for a human being, not for technology, when presented with the exact same information, humans
(24:32):
can still come to different conclusions. Right. And your values are different than my values, right, and that's emotion driven. If we were all operating off of pure logic, then we could logic our way to the exact same conclusion. So the machines, we could logic them to do anything. I could convince a machine to do anything if my logic skills and debate skills were
(24:55):
good enough. But humans always have that emotional factor, and I think it's something that we see a lot in politics. You know, you love... well, I mean, I don't know if anyone loves politics anymore, but it's a topic of interest for you. And it's something we see with a lot of politicians: they're very good at logic. They logic their way to a conclusion, and they logic the public into believing their conclusion.
(25:21):
Well then, look at twenty twenty, right. We had an election, and it took longer to get the results that everybody needed to declare themselves victory. But you still have one group of people who are convinced, like, I didn't win... because that's their story, that's their sticking point. And so it becomes emotional, and as you get attached
(25:45):
to something, those decision making processes become more and more illogical if you're looking at it from a pure logic standpoint, right. Blindly following a politician because of how I feel is totally within your right if you've gone down that glide path. On the other hand, math doesn't lie, right. Math is a
(26:07):
universal law, right. So at what point does a computer that's based in this universal law that we call math become able to override the laws of mathematics and do what it wants? And if we don't put guardrails in place, and we don't have those ethical frameworks in place, the ability for robots or AI systems to interpret those laws and directives has significant
(26:33):
ethical implications and potentially unintended consequences. Yeah, because a robot could look at, you know, let's just say kittens. And a human's like, oh, kittens, little baby cats. They're so cute, they're so sweet. I love them. And the computer could make a logic-based decision and run through the statistics
(26:55):
and go, okay, the majority of people are allergic to kittens. They grow up to be cats. They're not good for society. Let's kill all the kittens. You know, as humans, we have the emotions. We're like, no, don't kill them. But we've asked the AI to control the cat population. Yeah, or we've asked it to control the issues with allergies, and so it's now moderating off of a pure logic decision with holes in
(27:18):
it, and now we have no cats left, and then the mice population gets out of control, and then we have a new plague. So, I mean, I could just see this domino effect happening so quickly with something so simple. So a human can still make those emotion-based, or just more completely informed, decisions that a robot can't. Now, let's put
(27:38):
it in a bigger system. Let's put it in the United States nuclear response, which, in my understanding, is why they keep it old tech, for specifically that reason. That's exactly it, because you can't hack into it; it's all on technology that you can't connect to what we now know as the Internet.
(28:00):
But what if we were putting AI in those defense mechanisms, even the first line of defense, and a flock of seagulls approached, but it had a signature similar to a group of planes that were coming over from Russia? Well, now we start lobbing nuclear missiles around, and it becomes a real problem, right, because it says, I see this
(28:22):
radar signature, it matches what I'm expecting from a plane, and that just doesn't end well, right. Like, that's a big down-the-road thing, but it's still something that we have to take into account when we talk about this.
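A toy version of that failure mode: classify an incoming radar return by its similarity to a stored profile, with an invented threshold. Every number here is made up; the point is just that any "close enough" matching rule produces false alarms:

```python
# Toy signature matcher. All profiles and thresholds are invented; the
# point is that similarity thresholds inevitably create false positives.
import math

def similarity(a: list[float], b: list[float]) -> float:
    # Cosine similarity between two feature vectors (1.0 = identical shape).
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

bomber_profile = [0.9, 0.8, 0.7]   # stored "hostile aircraft" signature
seagull_flock  = [0.8, 0.7, 0.75]  # unlucky flock with a similar return

THRESHOLD = 0.95                   # "matches what I'm expecting"
score = similarity(seagull_flock, bomber_profile)
print(f"score={score:.3f}")
if score > THRESHOLD:
    print("ALERT: hostile aircraft inbound")  # false positive, real problem
```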
So for just us everyday people: you know, I'm using technology on my computer, I have, you know, it's like I have a
(28:45):
phone. I'm surrounded by tech. How can everyday people protect themselves from some of these emerging technologies while also still using them? Because I don't have any intention of becoming Amish, so I know that I need to adapt to the changing world while also still learning the new safety and security. I don't want to be like the old people, you know,
(29:06):
bless their hearts when email came out, they were clicking on everything and
you know, filling their computer upwith virus. I still want to learn
about the new things as they're comingout, right, and so I don't
think we will have a choice tonot be exposed to it. And the
reason I say that is because Ilook at stuff like self driving cars,
(29:26):
like what Tesla has been able tocome up with with the self driving cars?
How many times does that actually runinto somebody and cause problems? Does
it get hyped by the media becauseElon Musk is always in front of the
news. Yeah? Is it still? Is it still a problem that we
need to address on how to identifya bicyclist versus the road? Yeah?
(29:47):
But what happens with software updates? What happens with the supply chain behind the AI that we use? Right, like, you mentioned your phone. Your phone is probably a Google phone or a Samsung phone, and it has a special chip in it that's just for AI computation, used to predict what you're going to do, to make it so that it runs faster when it knows
(30:08):
what you're going to do. When you wake up, it knows you have your routine; when you go to bed, it knows you have your routine. And so it makes it a little faster for you to do stuff. Or it makes it slower because it wants you on a new phone. Well, there is that, right, there is the planned obsolescence of it. But I
think as we see AI being moreand more a part of our daily life,
(30:30):
it becomes more imperative that the peoplewho are using AI in these in
these ways, banks, telephone companies, No, no, no, no,
no, We're the regular users.We need to be informed when AI
is making a decision for us.Okay, right, I need to be
informed to when my phone, whenwhen my bank is doing something based off
(30:51):
an AI model versus something that they'vedecided because I'm a customer, right they
My car can't go into AI modewithout me hitting a button? Okay,
stuff like that, Right Where Ihave a physical control where I can disconnect.
If I don't want to be connectedto the Internet, I can disconnect.
Yeah, we're thinking Will Smith backin his fancy car, he wants
to switch manual. Ever, whyjust mentioned manual? Nobody does that,
(31:11):
but you want that control. He was being attacked by robots; he needed it, so you go to manual. And that's the problem with a supply chain based attack, right. When we talk about AI and we talk about how people will use these models negatively, that's one of them, right. You have to keep as much manual control as you possibly can if you don't want to have your life dictated by AI. But I think that that line is becoming
(31:34):
more and more blurred as we go forward, and it becomes more and more prevalent in the day to day. So what does that look like right now, with our current technologies and the average person's exposure to and usage of AI? Well, I mean, it shows up in different ways. Gmail. When you're looking through Gmail, your email is being scanned by Google's AI algorithms
(31:57):
to build the large language model that they use, called BERT. Your Google Docs are scanned by AI too. And you talked about my background in cyber, in digital forensics and incident response. AI is becoming more and more used to identify dirty data. And when
(32:21):
I say dirty data, I mean images that we don't want to talk about or think about, but we know are out there, because the dark net is a real thing, and we see people getting busted for it. Are we talking about, like, child porn and trafficking? Yeah, human trafficking, child pornography, that kind of stuff.
The way that they are able toidentify victims of human trafficking and child
(32:42):
sex trade is because they can startto ident It's because somebody figured out a
way that if we start watching thevideos and the content that they're putting out
on the internet, we can starttogether. We can start to put places
together. We can start to usethe objects in a room to kind of
get an idea of what hotel we'reat and what city they're at based on
pizza boxes or pizza boxes, rightif you're their own and if you don't,
(33:06):
like, if AI was to analyze this room that I'm in right now, it would see the pictures that I've got, it would see the books, and it would start to be able to tie it together, come up with an idea of not only who I am, but where I am, right. It could start to tell that I'm at home versus
at work based on my background.And so we use AI to go in
(33:27):
and find these images so that you'renot paying somebody to sit behind a computer
in a dark room and look atall these horrific images. That's what Facebook
security does. Like it's a goodtechnology though, it's great something that can
be used for a lot of good. Yeah, it's great technology. It's
used for a lot of good andit helps law enforcement prevent the burnout of
agents. It helps see social mediacompanies that are trying to take down bad
(33:50):
data. Live streams of mass shootings that we've seen on Facebook have all been identified by people sitting in rooms, or by AI that's identified them. Dropbox and the other cloud providers use AI to predict if you've got child porn based on the image signature. Because if you break down an image into ones and zeros, a picture that's been copied one hundred and fifty thousand times across the internet
(34:14):
will still have that same image signature. So you can start to piece together who's downloaded that image, and go after them, and usually find more. But when you hear stories about how Dropbox informed the local police department that so-and-so had one hundred and fifty thousand child porn images in this Dropbox folder, that's how. They're not looking at every file that you upload. They're using AI to go and do a pattern match on the digital signature of these files to
(34:37):
identify child porn. So AI hasits place. AI has its place for
image recognition and identification, the sameway that if you take a picture of
something that you don't know what itis, or you're in a foreign country
and you take a picture of astreet sign, it's going to automatically translate
it for you and tell you whatit says or identify what it is that
you're looking at. So it soundslike the technologies, you know, just
(34:58):
like with anything that human can comeup with, could be used for good
or bad. And where the largepart right now? Because my question was,
you know, average everyday people rightnow, what our USA is looking
like. Sounds like right now it'sbeing used for good. Yeah, but
the powers in control of these technologiesare going to be the ones who dictate
how it can run used to getused. Right. If it can tell
(35:21):
me where I'm at now, now, it's like for bad, it can
start saying hey, this person's hereand start false flagging me or making a
video that has you saying something andmaking your lips move and with deep saying
the deep fix, right, that'sa real issue in politsan current events.
If I can get Barack Obama tosay something racist, or another presidential candidate
(35:42):
to say something racist, I cantake their campaign right right. There were
thoughts in twenty sixteen when the AccessHollywood tape came out that it was a
deep fake that somebody generated because itwas just audio. It's just Donald Trump's
like the part that everybody got offendedabout, and rightly so, the part
that everybody heard was just audio,right because the camera was outside the bus.
(36:05):
So theoretically AI could go in andpattern match Donald Trump's voice the same
way my bank pattern matches my voicewhen I call in and says say if
you say your code word to identifyyourself, and it uses that pattern match
to identify me from you. Likeif you called from my phone and called
my bank and it said say yourcode word, and you said the same
code word, it would know mefrom you. Right. But these deep
(36:28):
fakes can be very problematic, andwe don't know who's controlling them, we
don't know who's using them for what, and we don't know what the implications
are when it comes to how they'llbe used in the future. So more
than likely for your just you know, because I feel like you know,
politicians, celebrities, anyone who's inthe spotlight in Elon Musk, those are
highly vulnerable people that are probably gettinga number of cyber attacks on them every
(36:52):
hour, and they more than likelyhave a whole team of experts behind their
cybersecurity. You would hope, youwould hope, you would think, you
would assume, But what about justyou know, I'm a podcaster, I
have an email, and I havea bank account. Am I am I
at risk? Am I not aconcern? No, you're you're at risk
(37:13):
if somebody, if it he wantsyour if somebody wants your data, they're
going to figure out a way toget it. And you can put all
the extra security behind it. Youcan put all But we talk about the
voice pattern matching. Now, whatif somebody was able to go pattern match
my voice and get me to saymy code word from my bank into the
bank system, and it was ableto pattern match me because AI, how
does it know that it's not metalking? Risks an aim right to see
(37:37):
my AI that if you use yourface to unlock your phone, all it's
doing is taking a digital fingerprint ofyour face and your eyes and turning it
into an AM and the AI modelis saying, Yep, that's you.
Nope, that's not him. Yepthat's him. Nope, that's not him.
Somebody else is looking at this phone, So what's just And we've seen
(37:59):
their you can where law enforcement hasused that to unlock people's phones and force
them to unlock their phones. Right, because they can't unlock their phone.
There's no way to get into anApple phone. If I give you my
phone, there's no way for youto get into it. Theoretically, there
are tools out there you can useto hack it. It's like the mission
impossible, the fake face right,right, that even the fake face won't
(38:20):
work because it doesn't match the actualcontourists. It's mathematically extracted from my pretty
face, right if it does?Yeah, it is, But what if
somebody does get to that point?What guardrails? What how do we protect
ourselves for that level of impersonating?Impersonation attacks or nothing? Now? Right?
(38:40):
You see it all the time whenyou get I get text messages like
if I if there's a big meetinggoing on, our big work event,
I get text messages from fate fromscammers all the time. Hey, this
is your boss. I need youto go buy me a bunch of gift
cards and bring them back and sendme pictures because they're so obvious. Been
a personal trainer now for over adecade. Always get this one, like
once a week: they'll say that they need personal training for their three deaf daughters, and it's always the same one, and they ask if I take credit card. I'm like, how's this still happening? Right? Because people fall for it. It's the same way that spam emails keep happening: because if you send out two hundred thousand emails and two people click on it and it generates twenty dollars for you, that's twenty
two people click on it and itgenerates twenty dollars for you, that's twenty
(39:29):
more dollars and you had before.So it becomes the it's it's the law
of large numbers. And that's whywe work so hard in the information security
space to protect those resources, becauseyeah, you get that email or that
text message or that email, butsometimes it's an accounting clerk that's like,
oh, yeah, this is aperson that I've worked with before. I'm
just going to go ahead and wirethis money that they've asked me to wire
(39:51):
that I was planning on doing,and it goes to a bank account that
you didn't plan on going to.I've seen that happen. Yeah, well
I've you know, it's like there'sthe joke circling around them pinterest. If
you're planning a wedding, send aninvite to all the billionaires. You can
get their address and their secretary willprobably just send you a thank you card
and a check because they're not evengoing to look at it. So what
I'm hearing is that these threats are... they're there. They've always been there, but they've been, you know... I don't want to put anyone down who's ever fallen for these, because some of them are more convincing than others. But it sounds like, for the most part, they've been pretty easily identifiable. Oh yeah, but because of AI, these threats are starting to become harder to identify and more, what would you say, omnipresent. More personal. More personal,
harder to identify and more what yousay omnipress more personal, more personal,
(40:36):
the more targeted towards you. Right, Like, if I can extract information
about you and I can voiceprint somebodythat, if if I can voiceprint somebody
you know's voice, you're you're sibling, your parents and all up and say,
Hey, I'm in jail in afour country and I'm at this location,
I this much money, wied whatare you gonna do? You're gonna
ut. That's a common senior scam. They scasing your citizens with that one
(41:01):
regular But now it's but now it'sgetting to the point where when they run
that scam, they're using it usinginformation that you've left out on the Internet
that they've figured out, and they'vefigured out how to make it more and
more personal and target. So theycouldn't call me and you at the same
time and run that same scam,but they'd call you and you something that
(41:22):
would trigger you to do it rightlike, And that's the scary part.
That's you're mentioning. You're mentioning somebody'spet or personal piece of information, right
something that nobody else would know thatyou may have that they've just happened to
extrapolate from somewhere. You don't evenhave to know where. It just has
to be something that you wouldn't normallytell somebody or sure, and that's they're
(41:43):
in And now is there AI doingthis or is this some you know?
Because it's like I'm imagining either likean evil Vicky hacking my bank account to
steal five dollars from me, becausethat's all that's in there, or I'm
imagining some kid in a hoodie hackingon his computer who specifically targeted me.
Yeah, but what's it's probably bothright. We're getting to the point now
when it comes to cybersecurity defenses, that we're using AI to detect who's using AI to hack systems. Because now, no longer do you need to know how to hack. You tell the AI to do it, and it'll do it for you. The same theory of telling ChatGPT to go spit you out an application: it will go and write you a whole application if you tell it what you want it to do. Right, if you know what you want a program or an application to do, you go
what you want in a program,we're an application to do. You go
(42:36):
and tell a and it'll start weighingout what you need to do and giving
you the code to do it.And I know that universities are working on
I don't know if it's university specifically, but there are AI tools that are
being developed to help again lajorized essaysand research at universities, or at least
still want you to do the research. They still like you do to work.
But it's like, what's stopping peoplefrom using a chat GPT is a
(43:00):
tool to help enhance their nothing oreth against their essay nothing. I mean
you give give you give chat GPTor another AI model, just enough information.
It's going to spit out whatever youwant. Be at a seventy page
the research paper that you had todo, and it'll give you almost factual
sources. You still have to gothrough and check it, because I think
that's how they catch you. Butit's getting point now where people are writing
(43:24):
programs that will go in and insteadof asking the AI to spit something out,
it'll say you recognize this, andit'll say, yeah, I recognize
this. I wrote this at thistime, at this date, and it
was in response to the square becauselike we talked about at the beginning,
these AI models suck in everything thatyou teach it. And so it's a
real threat in corporate America where yougo a and you start typing, hey,
(43:46):
help me draft this respotter for legalfiling, and spit it out,
and it spits it out, butit leaves in the fact it leaves in
little nuggets that if you don't check, it's going to give it away that
it was generated by AI. SoI think they're they're using that kind of
like with that when we when wewere in college and we would upload the
(44:09):
picture the paper into the content thecourse management system and it would run a
plagiarism check to see if it matchedthe sources. Right, Oh nope,
this came from Wikipedia. He stoleit from this source. He didn't cite
this, right. I think it'sgetting to that point with AI as well.
The problem with AI is are theygoing to be fast enough to catch
every single model that's out there and everything? Because everybody's AI is so big right now; everybody's kind of generating their own models and creating their own models for their brand. Now, great, so that's the other thing. That's the other threat. So, kind of thinking about, like, a centralized or a decentralized type system, because I do want to kind of try and keep it relevant to the movie I, Robot, and talking about Vicky specifically, because she was the evil entity
Robot and walking about Vicky specifically becauseshe was the she was the eagle entity
(44:57):
in I Robot. She was thebig threat to humanity because previous to the
update with Vicky, they were justsort of the dumb robots that were that
we're pretty helpful for the most part, and they were helping Grandma stay independent,
cleaning the house, walking the dog. But as soon as Vicky came
out and she was able to startmaking these more more you know, artificially
(45:20):
intelligent, intelligent type decisions, shewas able to control the masses of robots
for large scale destruction. So that'sthe problem with centralization, right. So
in our current climate right now withAI, is it centralized, is it
decentralized? Are they all different?Like what does it kind of look like
right now? Just the just kindof paint a picture for me the general
(45:44):
landscape because I so so every companyis trying to figure out how they're going
to use AI in their day today operations because if one person's doing it,
especially with big tech sectors, rightand when we talk about the big
tex Tex sectors, we're talking aboutFacebook, We're talking about or sorry Meta
Meta, Twitter, Oh who else? The Microsoft? Google? Yeah,
(46:08):
Microsoft, Alphabet, Twitter, Google, the big the big five tech companies.
Right, it's a race to seewho can get the most AI and
get their models out first. Andthen every other company is trying to figure
out how they're going to use it. So so the big companies that can
afford it because it's expensive to RENEIare building out their AI models and trying
(46:29):
to get them out to the marketa face can okay. Other companies are
hanging on or investing in other models, where by the power of collectivity,
fifteen companies can get together build anAI model that suits their needs and they
work together, they share the data. So right now, AI is very
decentralized, kind of like the Internetitself. The Internet is not one thing
(46:51):
that you can blow up and takeit offline. Right. It's server.
It's server farms throughout the entire planet. Right, It's satellite, it's a
comp routing. But there's no onelike, there's no one router that we
can There's not some some shiny bluebox at the center of the earth.
It's like that is the Internet.It's a brightware, right. And so
(47:12):
as AI grows, we're going toeither see it become more decentralized, which
is what I'm and smaller in individualnodes to interact with each other but don't
depend on each other, which iswhat I personally am hoping for, or
you're going to see a consolidation ofall that information, because at the end
of the day, he who theywho have the most information have the most
power. And so if we seethat consolidation and it becomes centralized, then
(47:35):
it becomes much more scared. Sothat's if we're looking at just to kind
of put it in film terms,if you know, because we've got a
big MMA site coming up apparently.So let's just take Meta. Let's take
what is the new and Threads.Let's take Threads versus Twitter. If if
Meta Threads and Mark Zuckerberg, iftheir company becomes so big, let's say
there, let's say they're Skynet versusElon Musk's company. They could get so
(48:00):
big and they're Vicky. So youhave these two ultra powerful, ultra intelligent
entities that are that could be athreat, right versus a more positive potential
outlook would be it's just scattered allover the place. It's not that big
brother Eagle Live saying. It's everylittle person's interpretation of it. Yep,
(48:24):
No, I mean that's that's onehundred percent right. And the threat is
as that consolidation for power becomes moreprevalent, what does it due to the
independent researchers, what is it dueto the small, right, the small
mom and pop research shops, andI use that term to kind of put
it into Walmart versus Amazon terms.But we see it'll be similar to what
(48:45):
happened with Amazon and the and theretail stores. Right, it's going to
get consolidated. One group is goingto have control, and you can and
we don't know what they're putting intothose models. We don't know how they're
training those models, We don't knowwhat they're ingesting, and we don't know
the science of of what's outputting decision. And we open source that and make
(49:07):
it so that other people can seewhat's going on inside of these complex computer
programs. We will never know,and we will never be able to secure
it, and we will never beable to protect ourselves from a threat like
this. So it sounds like likelawmakers become a very important part of this
equation right now, because we dohave some very large companies with very large
(49:27):
monopolies. You know, Disney comesto mind. Disney owns everything if you
guys don't know, like Disney ownsESPN, Disney owns just about the whole
world. Sorry, you have thesecompanies that are as powerful as Disney how
do we fragment that back down andkeep it in a safer, less centralized
(49:51):
system. We have to elect peoplewho know what's going on, which is
the problem that we have right nowbecause if you look at the makeup of
our federal lawmakers and the people whoset federal tech policy, none of them,
Wow, let me rephrase it,very very very very very very very
very very few have any idea ofhow the internet or technology at all works.
(50:13):
Right. I have just guys wereborn before the internet. I mean
like they have of them. We'reborn before we had a computer. When
computer they in one room. Theyhad to read from a teleprompter and that's
it, and that's it. Everybodyelse is running the Tech Forum the older
and this didn't sound ages but andI really don't intend it to be.
But the older that our representation gets, and the more that we continue to
(50:38):
elect people who have zero clue aboutwhat's going on, go listen to a
hearing that they had with META afew months ago, and you'll just hear
how they don't get it. That'sthe scary part because the controls that we
need to have in place, wehave to rely on people in their eighties
seventies, eighties, nineties to comeup and say we need to protect this
(51:00):
and we need to do it thisway. And I think that that's where
people get lost, no, andthat that really, uh you know,
kind of makes me think again,if I robot and the inventor I can't
remember his name, who was theguy who invented Sonny the good robot?
He was one to invented all thetech middle of Tom's Lanning Alfred Lanning right
(51:22):
right right, So you know wehave guys like that, and it's like
he was aware of the threat ofVicky now right now already. I'm just
you know, I'm drawing parallels thatmay or may not existed on based off
of my limited information. But youknow, Elon Musk, he's like,
hey, you know, we needto be worried about this tech right now.
(51:43):
He signed the petition. He's like, we need to slow it down.
So he even he even even thecreators of chat GBT, I said,
you need to give us legislative guardrailsor this is going to get control
right right, And the problem iswe don't have people who know what those
guardrails need to be, and theydon't want to listen to who do Yeah,
yeah. Similar to, sort of... I just pulled it up on Internet Movie Database: Doctor Alfred Lanning. Yep, he was. He created Sonny the robot to help him fight against Vicky, because Vicky was the centralized intelligence, whereas Sonny was... You know, it's interesting, because we kind of went off on a tangent away from the movie and talked about these two different
went off on a tangent away fromthe movie and talked about these two different
systems, but it circles back to the movie, which is so old, and the book even more so. You know, it's like, when was the book written? The book, to give you an idea... the nineteen fifties. Hold on. Wow, that's incredible, and that just goes to show some of the brilliance of our forefathers and what they were able to think of. I'm sorry, when it was written... so still, that's a long time. That's pretty old as far as these
so it's still that's a long time. That's pretty old as far as these
(52:50):
I've got conversations. Yeah no,but that's still. But it goes back
in time. Right, here's thething. It greats back in time.
So it starts way way back intime and ends up in twenty thirty five.
Oh that's when the book takes Nope, that's the scary plass, a
scary part and the fact that let'swho are on track, right, I
(53:13):
know? And it shows the Itshowed a bunch of important concepts in modern
software engineering, which is vulnerabilities andpatch management to keep your systems from being
compromised and pretend to prevent them fromdoing things that you don't intend them.
It talks about AI and machine learningand the prevalence of it, and it
talks about centralization versus decentralized, whichis why the Internet was designed to be
(53:36):
decentralized. And it also talks aboutthe ethics that need to go into these
large models. If we're going tolet robots do things, we need to
control them, and we need toput guard rails around them. And and
those guardrails are going to be imperativeas we get, as we get further
down this rabbit hole of AI inthe modern age, and we can almost
even incorporate... it sounds like we are, because, you know, Sonny, the good robot in I, Robot, he was decentralized; that was his purpose. What he was created for was to help fight against the big, big bad guys, Vicky, Skynet, whatever you want to call it. And it sounds like we do... we are working on creating similar technologies. They're much smaller, but
do. We are working on creatingsimilar technologies. They're much smaller, but
we have our own small AI systems that help fight against the big bad AI systems, and we're using AI to defeat AI. Right. That's the other thing. Yeah, what I don't think people realize is how much of the modern network defenses and cybersecurity apparatus that we use is based on using AI to detect what people are using AI to attack us with. It's kind of
to detect what people are using AIto attack us with. It's kind of
this weird circle. Well, it's really quite impressive, because you think about the history of humanity, and it was, you know, early man... That's my dog in the background. Sorry, guys, we've got a low-budget camera. This is a good time to mention that you can join me as a patron and donate to the podcast. This is just my pilot season, and I'm hoping for season two to have better camera equipment, so you don't need to listen to the dog quenching his thirst in the background. But back to the
listen to the dog quenching his thirstin the background. But back to the
(55:15):
question I was asking, is youknow early man, it was bigger,
stronger, the bigger fist one,and then it was the smarter guy with
the tool. He had tools andhe could win. And then this guy
was even smarter. He could createtools and fire, and it was just
it's always been humanity escalating our knowledgeto enhance ourselves. And I talked about
this on another episode in this season, sold your augmentation and human augmentation in
(55:38):
general. But it's always seeming likeit's still humans. It's still us,
and AI is still just the weaponwe create for good or bad. And
technology in itself isn't bad. It'sthe humans who are creating it and what
we intend to do with it,which ends up it just feels like it's
(56:00):
just going to continue on. It'snot the problem. I just want to
say to your listeners. AI isnot the problem. AI is not the
It's the humans that are programming theAI that are the problem that we need
to put controls around. Yeah,it's going to do what it's going to
do based on the ones and zerosthat we point at it and expect it
(56:21):
to return. The controls and theguardrails that we keep talking about and people
keep asking for around AI is allto control the people, to keep the
people from doing bad, because atthe end of the day, you're right,
it's a machine, and the machineonly knows two things a one or
zero. Lots of ones and zerosbut all it knows isn't one in the
(56:42):
zero, and it's the humans thatwe have to worry about the most.
Yeah, and that's they're a goodthing for humanity because we've seen it since
the beginning of time, or itcould be turned out to be a very
dark thing that we're going to endup having to fight against. You know
what, And I'll just say thisagain, I believe in humans. You
know, there's a few bad eggs out there that can do some pretty horrific things on a very large scale, but humanity seems to continue to pull through. And I do believe that people... you know, people might say terrible things, but at the end of the day, we do care about each other. And maybe you make fun of someone on the internet, but if you were alone with that person and
of someone on the internet, butif you were alone with that person and
you saw them suffering, our empathy would kick in and we would feel compelled to care for and nurture that person. It's almost like we need to decentralize humanity, because we're so much better to each other in small groups, and the more centralized we get on the internet, the meaner we get. It's like AI, and, you know, I'm kind of having that conclusion right now: it's just a reflection of us. Smaller groups good, larger groups bad. It's the law of large numbers. Yeah, right. Well, let's
bad. It's a law of largenumbers. Yeah right, Well, let's
(57:51):
kind of let's kind of tie someof these big l winks together. Let's
let's do our let's do our realitycheck. So pulling up our scoring scale,
we have pure fiction, speculative science, fringe reality, emerging fact,
and science fact. Let's score today'stopic on the feasibility of a centralized AI
(58:12):
system kind of just controlling. Youknow, I don't want to say robots
because we don't have robots right now, but let's just stay centralized AI controlling
society vire we out on that fivescience science fact. It is happening,
ye sweet, and maybe not bytwenty thirty five like the movie just but
(58:36):
with an ext fifty years, it'sgoing to be a wonderful. It's going
to be part of our reality.Okay, okay, And what can we
do about it? So let's kindof you know, we're wrapping things up.
Is there anything else you wanted tosay about this technology? And I
really just want to end this ona positive note. What don't we what
can we what can we do aboutit to protect ourselves? Should we be
(58:58):
scared? Number one explorer. Ifyou're curious about it, explort number two.
Vote and everybody says that, butvote and vote for people who understand
this and understand the reality. AndI'm not saying that you need to run
for office or be political if you'relistening to your life. Nope, I
(59:19):
don't want to do that. Youjust need to vote. We need to
vote smart people, and we needto get past where we are in the
state of politics. And I saythat as somebody who's worked in politics and
tech. We need to get tothat. And frankly, for a long
time wanted to do it or technologypolicy for the government, but it's just
become antenuous. We need to justvote for smart people. We need smart
(59:42):
leaders. And then we need toclap back on some of these companies that
have a lot of our data.We saw it in the news last week.
We're a quick book or these taxpreparation companies not quick books. I'm
sorry, we're selling tax information andyour most personal financial data to meta to
build into they're scoring mode. Yousee the credit card companies that are entrusted
(01:00:05):
with holding every aspect of your financialwell being getting hacked because somebody forgot to
unpack, you forgot to patch onesmall part of the system, and it
open things up. We need todemand accountability and we need to demand data
privacy and data security laws. Weneed to stand up as a collective people
and say, these companies cannot controlmy data list and I need to be
(01:00:30):
given the ability to opt out.And that's not just using something like a
VPN or like there. It's notsmall stuff that we can do. It's
going to be the thing and thatwill protect your security posture. But if
your tax preparation company is selling yourinformation to Facebook, there's nothing you can
do. Yeah, if your bankand another good example of the data leakage
(01:00:52):
is your bank and you get allthese insurance offers from you that are addressed
as being from your bank because theywant you to buy this party insurance.
They're selling your data to somebody andyou can't stop that, like if you
look through their degree. So becomeinformed about data privacy and its realities and
what we can the controls that youcan wrap around it. And the third
(01:01:13):
thing is be hopeful, don't getdown on where we're at technology wise,
because the technology is only what peoplemake, and so that means if you
want to make a difference, getinvolved, learn how it works, play
with it, become a veritable expert, and learn how it impacts your daily
(01:01:34):
life and advocate for that change.And if we do that as a society,
we can prevent some of these exploitsthat I'm afraid we're going to be
seeing in the next five to tenyears. Is this technology grows? Yeah,
well, thank you so much,Sean. Is there is there any
places that you would recommend for peopleto go to educate themselves further? Where
do you want to give us anyof your Do you want to give us
(01:01:57):
any of your personal information? Wherewe could find you if we have for
the questions follow you professionally. Yeah. So you can find me on LinkedIn.
Just type in my name, Ipop right up. You'll know it's
me because it's got my smiling mug. You can find me on Twitter,
apt inked I n k eed tatert A t e R. You can
find me on Instagram there too.I don't have Facebook, I'm sorry.
(01:02:22):
I am always on the talking andlecture circuit about AI and cybersecurity and cloud
operations. So just look for meand then go pick up Weapons of Math
Math Destruction. And then the otherone is The twenty six Words That Created
the Internet, which talks about howwe've we legislatively created some of these controls
(01:02:45):
that these companies get away with unintendedand the unintended consequences it's had. Those
two books will be a great primerand make you want to dive deeper.
But if you get if you getstuck or you have questions, please don't
hesitate to reach out to me.I'm pretty easy to find perfect well.
Thank you so much, Sean,and I will definitely put all of those
links to contact you in the descriptionof the those episodes and the links for
(01:03:07):
those books as well. Thank youso much. And it was so fun.
Has hey you two. Thank youso much. Reality Check Science Fiction