Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
First Contact with Laurie Segall is a production of Dot
Dot Dot Media and iHeartRadio. In the future,
we could have these chips inside of our brains. Do
you think they could be hacked? In your brain, it's
just a button press. So instead of changing minds
one at a time, we can change the minds
of everyone in Michigan and make them vote one way
or the other. Suddenly, you know, creating a misinformation ad
(00:24):
on Facebook looks like peanuts. What if we could order
our own dreams? And could our thoughts be hacked? Will
technology make some of us superhuman? And if so, would
we create a whole new species? Is death really the
(00:45):
final step? Or could our brains answer vital questions once
our bodies are gone? These are topics from a conversation
with Moran Cerf. He's a professor of neuroscience at the
Kellogg School of Management. I'd also describe him as a creative.
He's a brain hacker. At one point he was also a
bank robber. We'll get into that, but really he's a
student of humanity. He likes to test the boundaries, see
(01:09):
how far he can go, and challenge us to anticipate
what's coming next, even the worst case scenario, just talking
to him is like living in an experiment. I can't
help but question if he's manipulating my memory or changing
my mind while we're doing the interview. But I think
that's the point. We talk about disinformation and manipulation in
this era of tech. But things get pretty personal when
(01:32):
it comes to the type of research Moran does. He
focuses on the brain. He focuses on your mind and
your sense of self, and how that sense of self
is increasingly hackable in the modern era. I'm Laurie
Segall, and this is First Contact. This show is called
First Contact, and I talk about my first contact with
(01:52):
the folks that we have on, and I was trying
to think back to our first contact, and I think
I want to say it was actually Ashley Madison. But
let me caveat that: it wasn't like we met
on the cheating website Ashley Madison. Um. We met because
of a story I was doing on Ashley Madison for
a show I did, Mostly Human. And you
were explaining to me all this research on how people
(02:16):
cheat, and like, in their weakest moments. Can you
just explain that really quickly? Yes. So I think that
there are by now a lot of scientists who look into that,
and they try to understand, basically, in big words,
bad behavior: why do people do bad stuff? We know
that it's a bad idea, and somehow our brains allow
us to do that. And it boils down to the
(02:36):
fact that we use an equation in our mind to
decide whether to do something bad, and the equation has
parameters like: how likely am I to get caught? If I
do get caught, how big is the punishment? And mostly,
the one that I found interesting is: if I do
all of that bad stuff, will it really change my perception of
myself as a good person? And the other thing is
that most people find ways to justify to themselves why
(02:57):
they're great people, even if they do bad stuff. So prisoners,
if you take them and ask them why did you
do that, they say, yeah, I did steal the bread,
but the reality was that it was stale and my
family was hungry, and you kind of find a way
to tell a story. And this is a property of
the brain. The brain has to think that you're great
when it does stuff and to frame the entire world
around these things. So when we ask why people cheat,
(03:18):
it's mostly because they can find a way to tell
a story that they're not cheating. That's so crazy. And you
know all this stuff because you study the brain. Um,
tell us a little bit about yourself. I mean, you
have such a fascinating background. You're like a hacker turned neuroscientist.
So I have, I guess, three different hats that I
wear, or used to wear at some point. One
of them is the one of a hacker, where I
(03:39):
spent nearly fifteen years breaking into banks and government institutes
to try to find flaws in the security and teach
them how to better save themselves from villains. That's the
first career. Then a second career that spans the last
fifteen years as a neuroscientist, trying to study the brain and
understand how it works in a unique way that kind
of borrows from my traditional hacking techniques. And then
(04:02):
the last five years I've been working as a business
professor as well, where I teach companies and MBA students
how they can use the knowledge about our brain and
about hacking to essentially understand customers better, to create value
in a different way, to align desires of people with outcomes,
and mostly how other people are influencing us and whether
(04:25):
we can stop it or not. And I mean, you used
to hack into banks and whatnot. Yeah, so back
in the early, oh, I guess mid-nineties, is when
I started as a kid, up to about ten years
after, 2005, 2006. I was working in a
company that did what we called active marketing, which is
(04:47):
we used to go to banks and first hack into
the system and steal money, and then go to the
board and say, look, your bank is insecure. We were
able to steal a million dollars yesterday by just doing this,
and that we're going to help you now secure stuff
better from bad villains who wouldn't tell you that they
did it. But also we'll take a cut of the
money that we're saving you. And what we did was
(05:07):
essentially just that. How did you get into that? I
started as a kid. So I kind of grew up in
the eighties, as computers did, and I basically knew how
they worked. And I started by hacking in Israel. I
was born in France and I was raised in Israel,
and in Israel in the eighties, computers started to become
a household thing, and if you wanted to add one
(05:29):
life to a Mario game, the only way to do it
was to hack. So I did that. Wait, the only
way to add a life was to hack?
What are you talking about? So you play Mario, this
like old game, and you're getting to level three where
you can't pass some monster. You try, and you try,
and you try, and it doesn't work. And then someone
teaches you that you can actually take a snapshot of
(05:51):
the game before you die and just after you die,
and see what's the difference between the image of the
computer before and after. And you see that one number
changed from two to one, so I guess this is
the lives. So I just put it back to three,
let the game run, and suddenly I have a
third life. That's as simple as I can depict what
we did back then, but it was enough to teach
(06:13):
us how hacking works. It has changed a lot in
the last twenty years, but the concepts are the same:
you take a snapshot before and after an event happens,
and you see what changed, and you learn how you
can manipulate that. Not too far from what we do
with the brain: we look at two events and try
to understand what happens in the brain. And so you
went from being a kid who is hacking Mario games,
and then all of a sudden you decided
(06:35):
to hack banks. Did you ever do any... I mean,
I feel like you're very proper and you
do, like, TED stuff now, and so, like, you have
a good name about you now. Did you ever do
anything kind of illegal? So, first of all,
between, let's say, age sixteen to age twenty-one, I
was a soldier in the Israeli army, doing that as
a soldier. What does that mean? So I was recruited
to the Israeli army to be in a team of
(06:56):
hackers who do the same things I did as a
kid for Mario games, but now to, you know, big
governments and nation states, and really apply the methods to
large-scale army stuff. Take us on a mission. Where
are we going? I think that back then, when I was
a soldier, the most kind of controversial things were encrypted files.
(07:18):
So files used to be encrypted with some codes that
were not too hard to crack. But there was of
course a need for the Israeli intelligence to get access
to those files. And they would bring a file to me
and say, this file is encrypted. There's a password; find
out what the password is. And sometimes not just
the password, but the thinking of the person who created
(07:41):
a password. So if I give you five passwords and
you crack them, and I see that all five of
them are the birthdays of your ex boyfriends, I can
start understanding how you actually think of passwords, so I
can crack the system by which you create them. So
this was my job as a soldier for many years.
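The scheme-inference idea he describes — crack a handful of passwords, spot the shared pattern, then search only the strings that pattern can produce — can be sketched roughly like this. (The DDMMYY birthday format and the sample passwords below are invented purely for illustration.)

```python
from datetime import date, timedelta

def looks_like_ddmmyy(pw):
    """Does this password parse as a day-month-year date string?"""
    if len(pw) != 6 or not pw.isdigit():
        return False
    day, month = int(pw[:2]), int(pw[2:4])
    return 1 <= day <= 31 and 1 <= month <= 12

def date_candidates(start_year=1950, end_year=1999):
    """Every DDMMYY string in a plausible birthday range."""
    d, end = date(start_year, 1, 1), date(end_year, 12, 31)
    out = []
    while d <= end:
        out.append(d.strftime("%d%m%y"))
        d += timedelta(days=1)
    return out

# Five already-cracked passwords (made up) all fit the date pattern...
cracked = ["120589", "230791", "010670", "311275", "150882"]
assert all(looks_like_ddmmyy(p) for p in cracked)

# ...so instead of brute-forcing all 10**6 six-digit strings,
# we try only the ~18,000 strings the inferred scheme can produce.
candidates = date_candidates()
```

The gain is purely in shrinking the search space; modern cracking tools apply the same idea with masks and rules rather than hand-written generators.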
I mean, it sounds like you were a hacker, but
you were always kind of very good with people too, right,
(08:02):
because it doesn't sound like you can be the kind
of hacker you were without being very good with
human beings. I would say that more than eighty percent
of hacking is psychology. Tell me about the first time
you broke into a bank. Did it give you a rush?
So the first bank robbery
was after I finished the army. We started this company,
(08:22):
and we were mostly hacking into banks virtually, as in
trying from home to get into it. But in our
contract was some clause that allowed us to actually physically
go and rob the bank. So not only were we
supposed to hack into the bank, but we were supposed
to also go there: go to the banks and
see if the cameras are pointing in the right direction,
or if someone left a post-it note with the
(08:44):
password next to the computer, which are physical ways of hacking.
And at some point we decided that we're going to
actually exercise the right to rob a bank, old fashioned,
you know, ski masks and going to the safe.
Take me to the scene, because it seems like you're
just, like, talking casually about putting on ski masks and
robbing a bank. Like, where were you guys, like in
(09:05):
a cafe somewhere, and you're like, we're going to rob
this bank with ski masks? One of the people in
my team said, you know, we are allowed to do that.
my team said, you know, we are allowed to do that.
We never tried. We should try. This is a small bank,
a small contract. Nothing bad would happen if we tried.
And we did, and we sat in the office on
a Friday afternoon and, you know, took drawings of the
bank, like every movie that you've seen. This was
our life for a weekend. We decided if it's better
(09:25):
to come from the entrance or from the back,
who should be in the getaway car, all of those
things, as you imagine a bank robbery, only that we
picked a small one with only one teller. We had
to do a lot of like preparations with the legal
team to make sure that if we get caught, someone
can call and say, hey, these guys are my people,
and don't put them in prison, they actually were allowed
to do that. A lot of the behind-the-scenes work that
(09:47):
I think bank robbers don't spend time doing. But on that day,
we came to the bank. And you asked me about
the moment before, kind of when, you know, boys
turned into men. This was a moment. I remember,
with all the legal precautions and with all the preparations,
the moment you enter a bank knowing that in ten steps
you're going to go to a person who doesn't know anything
and tell her that it's a bank robbery and ask
(10:08):
her to give you the key to the vault. And
that's going to be a stressful moment for everyone. It's scary.
I don't know how people do it for a living.
Just for a day it was really hard. Did you practice before?
Did you say those words aloud? What did you...
tell me what you said. I think, I think
we had a really, really kind of specific wording that we provided,
(10:30):
but it was along the lines of: this is a bank robbery,
please give us the key to the vault. Sounds really polite.
I don't know that it was very polite. It
worked a few times; it failed as many. We ended
up in prison multiple times. The police come, take you to prison,
and then a couple of hours later you get out
of prison, because a lawyer calls a lawyer and
everything gets organized. But in the first few hours, imagine
(10:51):
a police officer getting a call, showing up, and being
told by two kids, no, no, it's okay, we're allowed
to rob the bank, not a problem here. Sure,
sure you're allowed to rob the bank. But this is all
kind of like, this idea of hacking for
good in some capacity. Um, this is like the early phases
of white hat hacking. People don't understand what white hat
hacking is. That's what it is, what it was. There was
(11:12):
a lot of, a lot of kind of learning for
the system around this. Like, we didn't have exact names
for what we did. They didn't have, like, how to
hire people, because no one knew what it was. Now
it's a lot more organized. Now there are companies who do
that, and all the banks are required by law to
have hackers try to get into their systems, I think once
every quarter, and report to the government how well it
(11:34):
went. It became a lot more regulated. Yeah, our
world is so vulnerable, and there are good guys and
bad guys, and you actually need good guys like you
at the time, before you turned into, like, a neuroscientist.
We're going to get into all that. But breaking into
these places to show how vulnerable they are, I think
it becomes even more complicated right now, because now there
are nation states involved. And I was just in D.C.
(11:55):
a few weeks ago at a conference where the big
questions that were addressed had to do with when
hacking is an act of war or not. So if one country
hacks into systems and, say, shuts down the power grid,
and in doing so maybe closes a hospital for
(12:15):
a few days and maybe makes a few patients' lives miserable,
can you respond with a missile? The rules are not
clear right now. So most countries decided it's not, and
that hacking is its own thing and you can hack back,
but you can't respond with the military. And I think that
the U.S. is trying to think right now
whether it should change this policy. So I was there
thinking about how hacking can be large-scale military. Well, generally,
(12:38):
I think that whether it is what everyone agrees on or not,
I think at the end of the day it will
become the case, because right now you can do real
damage with hacking. So it's no longer confined to, okay,
stolen data. You can get into someone's pacemaker and
shut it down and kill them with a hack. So now
I think that when the hacks lead to civilian casualties
(13:00):
at that level, countries are going to start responding with kinetic force,
not just hacks. That's crazy. And I want to get
into brain hacking and all this stuff, because this is
the stuff that you're doing now that's so
fascinating to me. So for our listeners, it's like you've
always been my guy that I call, where I'm like,
hey, I'm working on this story and I know it's
kind of weird, but what's the future of X? You're
(13:22):
involved in, like, the weird, weird stuff, um, like
the stuff that people think is not going to happen,
but it actually is. So like, what is the craziest
stuff that's coming down the pipeline when it comes to
the future of our brain and our thoughts? So the crazy
things that are still without evidence but are actually going to be
real have to do with connecting brains directly. So that's
(13:42):
something that people talk about, but no one has proven
that it actually would work. But this would mean that
I somehow create a wire between my brain and your
brain and connect them. And in doing so, I'm not
just allowing thoughts to flow between your brain and my brain;
actually, the theory suggests, we create a new third entity
that is the sum of the two of us. So
what will happen if we do it is that immediately
(14:03):
a third entity emerges. It doesn't think it's you or me.
It thinks it's its own entity, and it thinks of
our brains as parts of it. Total science fiction
in the sense that we don't know how to
do it, but total science in the sense that it's
coming from real theory that looks at how brains work
when you connect them. That's the most extreme thing I'm thinking
about right now. Another, as extreme, and something that is
(14:24):
being explored right now is the question of what we
can do with a brain of a person who died.
So we think of death as the final step of
a person. They're no longer there, but it turns out
that when you're dead, your brain still has juice in it,
and the neurons can still work for a few minutes
to a few hours. And the question that scientists are
asking is can you essentially ask a person's brain questions
(14:47):
after the person is not there. You know, you execute
someone because you think they're the killer, and then in
the few minutes after they're dead, you ask them, did
you kill this person? And you get the true answer,
because there are no more boundaries; you can just ask the brain.
Now that's crazy. Did that actually happen? So I think
that right now we do a very, very limited version
of that, in that we take the tissue from the
(15:09):
brains of animals, which we call "sacrificing" but is actually killing,
put that under a microscope and inject currents into this tissue,
and it still does things for us. If you know
exactly where to inject the current, such that it will
activate a process that is exactly like the process that
happens in the brain, you can essentially read the output
and know what's going on. So think of the following
(15:30):
simple task: to look at a picture
and tell if there's a cat or not in this picture.
It's something that humans are perfect at and machines are
getting really, really good at, but they still work much harder.
If you take a baby and you tell it, tell
me if there's a cat or not here, it would
know how to point to the cat or not. So
we're terrific at that, and machines are terrible at that.
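Computationally, the setup he describes treats the tissue as a black-box classifier: present a stimulus, read the evoked response, record a label. A toy sketch of that control flow, with a lookup table standing in for the (entirely hypothetical) stimulation hardware:

```python
class TissueOracle:
    """Stand-in for ex-vivo tissue that answers 'cat or not'.
    A real rig would drive electrodes and decode evoked activity;
    here a lookup table fakes the response for illustration."""
    def __init__(self, responses):
        self._responses = responses

    def stimulate(self, image_id):
        # Inject a current pattern encoding the image, read the output.
        return self._responses[image_id]

def classify_batch(oracle, image_ids):
    """Query the oracle once per image and collect its labels."""
    return {img: oracle.stimulate(img) for img in image_ids}

# Hypothetical session: three images shown while the tissue is viable.
oracle = TissueOracle({"img1": "cat", "img2": "no_cat", "img3": "cat"})
labels = classify_batch(oracle, ["img1", "img2", "img3"])
```

The only point of the sketch is the control flow: the brain is treated as a black-box function from stimulus to label, which is exactly what makes the scenario he raises so unsettling.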
(15:50):
So at the very least, you can take a person
who died at noon, and for the few minutes while
their brain is still alive, until it decays, just show
it pictures and ask it if there's
a cat or not, and you can now classify images
using that person's brain. And there are people dying in the
millions every day, so you can just use the brains
of people who died to do chores for you while
(16:12):
they're not there to say, okay, I'm done with that.
We've got to take a quick break to hear from
our sponsors. But when we come back, the unintended consequences
of technology meant to enhance our brains. Chips implanted
in our brains could make us smarter, but could our
thoughts become hackable? We'll dig in after the break. I'm
(16:52):
very interested in Neuralink technology. You know, everyone's talking about
this idea of implanting chips in our brains to help
enhance our brains, make us smarter, make us feel kind
of limitless. Talk us through this. How close are we,
and what are some of the benefits? And then
let's get into the unintended consequences, which is always kind
of the part that I love to kind of
(17:14):
jam on. So, so when you think about putting a
chip in the brain or brain implant, there's kind of
three things that you have to deal with, and the
neuroscience is the easiest of the three. That's the one
we pretty much solved. We still have to tweak it,
and it's still something that we need to make sure
it works kind of chronically because it's in their brain forever.
But that's actually the easiest part. We know how to
(17:35):
stimulate the brain. In fact, right now, there are
about four thousand people in the US that already
have brain implants in their heads for clinical purposes,
something that helps them deal with Parkinson's or clinical
depression or some other problems. So we know how to
put a chip in your brain that speaks to your
brain and controls it and works with it. That is
the easy part, even though people think it's hard. The
(17:55):
two components of this endeavor that are still hard are, one,
how do you get a chip inside the brain? Right now,
the only way to get inside the brain is to
drill a hole and put a chip
directly inside. How to get it inside is a big problem.
Just swallowing a pill wouldn't work here. So we have
to find a way to get into the brain, passing
all the barriers that the brain created for chemicals to
(18:16):
get in. And it's not easy at all. So that's
where most of the work is spent. And the other
one is purely legal and regulatory. So even if tomorrow
you find a person who agrees to have their brain
exposed and says, put the chip inside my brain, I'm
okay with that, it's still not allowed. Doctors wouldn't be
allowed to do that. But also it's something that we're
(18:39):
worried would have consequences that we don't know of right now.
And that's the question you asked me. We know in
theory how to put a chip inside your brain and
to give it the powers to help you, but it's
unclear if we want that as a society. Do we
want people to have more IQ because we give
it to them, and leave the rest of us mortals behind? That is the question.
Right now, some of this technology is being used
(19:01):
to help people, right? People who need to move limbs
that aren't moving. I mean, it's actually already in use,
right? One person in France, in Grenoble, was given a
brain implant that controls an exoskeleton and allows him to move. So
he was in a wheelchair all his life, and now
he walks with limbs controlled by his brain the same
way he controlled his biological ones, only now it controls robotic ones.
(19:22):
But for all purposes, he walks. Now that's where it goes clinically:
we find ways to use it to help people, people
who lost limbs, people who lost some functions or lost sensations.
We can give it to them, and those are the
people who get chipped right now in the brain. The
fear, or the opportunity, is that suddenly people would want it
for pure enhancement. So when Elon Musk and his team
(19:44):
talk about that, they're not necessarily talking about helping people
who lost a function. They're talking about basically giving it
to Elon Musk and making him smarter, better, having the
entire world's information in his brain. And that is when
we start to talk about things that are beautiful in
what they give us, but also risky. So you
can imagine that having all of Wikipedia in your brain
(20:05):
as a simple query where you can get all the data
you want would be remarkable. You don't have to study
anything history-wise. You just have to ask the question,
when was the French Revolution? And the number is instantly
going to appear in your mind, the same way when you
ask how much is two plus five, the number
seven appears in your mind without having to process it.
You don't have to kind of do the same thing
you do with your phone, when you type the digits
(20:26):
and kind of ask how much is eleven times forty-seven,
get the answer, and now read it.
It will just emerge. That's fantastic. What else? Let's do more. So,
it's like, if I hear a song, I could ask,
what's that song, and it just tells me? Yes, you
can probably immediately compare it to all the songs out
there and see what it reminds you of, and, like,
you know, do this thing like, people like you also
liked that song, so it will immediately kind of have
(20:47):
a playlist of things that it will play, because it also
knows that you enjoyed these songs, so it will just
choose the next one that you would enjoy as much.
It will tell you when you need to go to sleep.
Humans are terrible at sleeping. They always delay sleep and
it's hurting us. It will just shut down; it will say,
like it or not, you're going to sleep, I'm shutting the brain off.
It will choose the best food for you out of
the menu items, so you will eat the healthiest things
(21:08):
for you. It will tell you how to be more
concise in speaking, so it will pick the words that
would help you, like a thesaurus in your brain. It
will probably do a better interview than I'm doing right now.
It will be, you know, high-frequency trading in your mind:
instead of having to ask computers to tell you what
stocks to invest in, it will do all the good stuff.
(21:30):
Where do I sign up? So here's where you kind
of ask yourself, do you want it? The question that
you should ask is, let's say I do that, but
so do all the other people around me, and suddenly
there are, like, different chips on the market. But you can afford
only the one that gives you ten IQ points, while
your friends can afford the one that gives them a hundred
IQ points. Suddenly all your friends become much smarter and
(21:50):
you're just a little bit smarter. What's, what's going to happen?
So right now we have this kind of understanding that
there's inequality in the world, but it's only inequality at
the level of money, pretty much in resources. So
one person who is rich affords to buy the best food
out there; one person who is poor affords only some
foods and maybe gets sicker, and that's, that's the inequality.
(22:12):
But at the end of the day, they both have
to eat, they both speak the same language, they both
live in the same places, they breathe the same air,
they kind of interact in the same world. Once you think
about making superhumans, basically those limitless people, they might
be a totally different species than us. So the example
that I sometimes give is us compared to apes. So
the, you know, bonobos, that are really, really clever animals,
they're basically us in terms of DNA. There's only two
percent difference between us and them, and it leads to
an enormous difference in practice. Like, we treat them as
animals and we're humans, you know. We put them in
cages, we give them bananas, but we don't really
try to interact with them. We don't say, let's ask
the bonobo what she thinks about this particular idea in politics.
We treat them as animals. If, you know, you think
(22:59):
of the smartest ape out there that can communicate
using symbols, we say, look at this one, it almost
interacts like a two-year-old kid. How amazing is that?
Now skip to a world where you can put neural implants
in the brain. Those superhumans, with a hundred fifty IQ
points above us, will probably think of us the way
we think of the bonobos. They would say, look at this one,
(23:19):
she's so smart, she can do differential equations in her mind,
just like a two-year-old kid from our species.
So beautiful. Let's put her back in the cage. Now,
that's kind of the world that we can imagine if
we start creating this, and this means that
there's going to be inequality at a level that we
have not experienced, where the rich and the poor in IQ
are really two different species. They're not just like two
(23:40):
people where one eats a little better than the other. They're,
you know, communicating differently, they might have technologies we don't understand,
really a different world. Do you think that this is
part of the... because you hear folks like Elon Musk
talking about this with such enthusiasm. Is this part
of the conversation, do you think, in Silicon Valley? I
think, because it's not technologically something that we know how to build
(24:02):
right now, people push it to the side and, you
know, they kind of deal with that later.
It's the same way we talk about AI.
We kind of say, yeah, yeah, one day it's going to
be smarter and better than us, but not right now,
so let's push it to the side and let's not
talk about it. And I think that that's a mistake.
I think that, you know, in the ideal world, we
should talk about things before they become reality, once we
(24:23):
start to explore them, and prepare for them. So I think
think it's actually this larger issue of like, well, the
tech titans are building out this technology, folks like
Elon Musk are talking about it, politicians don't quite understand anything
about it. We have people like you who are talking
about it in theory as it's being built out,
but there's not one, like, larger entity that's saying, okay, wait, guys,
(24:44):
we've got to figure this out. And then you have,
like, China over here doing all this crazy stuff with
different boundaries than we have. So it's, it's kind
of a whole stew of interesting problems that, when put together,
could provide for a dystopian future if we're not careful. Yeah.
So I think that you're really right. There's, there's a
really big gap between what regulators are talking about, what
people talk about in business and Silicon Valley, versus what
(25:07):
we talk about as scientists. I think that if we
raise the bar by starting to ask politicians those questions,
they would be required to learn about that and they
would know. So I think that here it is the job
of society to just ask questions and embarrass them a
few times, and then they would do a better job.
I think that's what's going to happen if we start asking
that repeatedly in town halls. Well, so, speaking of politics,
let's talk about the future of disinformation. Because what
I've always been interested in is, everybody is kind of
yelling about, like, one issue, and, and this is
why I've always kind of related to you. You're kind
of like ten steps ahead, being like, no, no,
you've got to pay attention to this, because this
is, like, coming down the pipeline and no one's even looking.
And you said something about someone being able to write
(25:52):
code into your brain and, like, change your mind. This
is where, like, it's like the emoji with your
head blowing up kind of thing. Should we go there
and terrify people? So I think the bad news, people, is it's
already happening. This is one of the things that we're
not talking about like far future. It's something that we
do in the lab every day now: change people's minds,
writing small thoughts. And the technology to write big thoughts
(26:14):
is already in testing, so we should tell
everyone about all of them. So in a way, first,
let's start with the kind of baseline, which is marketing
has been doing that for the last eighty years anyhow.
People have been finding ways to get into your mind
and change it in small deviations all the time. We're
just not too worried about it, because we think that
we know how to resist that. The reality is that
(26:35):
we don't. The reality is that if a company decided that
they want to target you and change your mind, with
all the might of their marketing team, they could do that.
This is why all of the companies in Silicon Valley are
hiring neuroscientists right now to help them in the marketing world,
basically to kind of apply addiction methods to get you
to spend more time looking at their ads or clicking
(26:57):
on their content. Wasn't that, like, very 2014?
Or aren't they, at least publicly, being like,
we're not doing that anymore? No, I think, I think
that they are just more efficient and more clever about it.
But I think that it's the same. My students, when
they finish their PhD, have a competition between going
to other universities and the temptation to go to Silicon
(27:18):
Valley and be hired by the same companies that
we know of, as employers who want them to do
the same thing that they did, as neuroscientists in the
service of those companies. So this is, this is kind
of the baseline: that marketing has worked. You don't know
if you like Coca-Cola or not. You have no idea
if you really like it, because your brain has been
trained for twenty-five years to think that this is good,
and the value of sugar was kind of aligned with
(27:41):
how much sugar is in Coca-Cola, such that this is
now what you think is good. You really have no idea.
It's really hard to disentangle what you think from what
you were trained to think. Okay, so that's level one. Level two:
We've been working in the last fifteen years as scientists
on changing memories. And what changing memories does is it
creates a new narrative for your life. How do you
(28:02):
do that? There are simple ways and complicated ways. The
complicated ones involved taking your brain in moments where it's vulnerable,
when the gods are down and changing things. One of
those moments, for instance, is your sleep. So right now,
there are studies that show that when you're sleeping, we
can find a specific moment where your brain is listening
to the outside the world and actually rewrite stuff into
(28:22):
your brain, and you will wake up with a different memory,
not knowing that someone changed it. That's the complicated one,
but we're working on it in the lab right now. So
it's something that's happening. So you're, like, sneaking into your
students' rooms while they're sleeping and messing with their brains?
Essentially. We bring them to the study by telling them
we're gonna have a study on sleep patterns: go to
sleep for a couple of hours in the lab. But
we're not telling them what the study is really about.
(28:43):
And we listen to their brain and wait for it
to get to the right moment where it's kind of
listening to the outside world, and then we use
olfactory cues, smells, essentially, to inject ideas into their
minds and change those ideas using other smells. And then
when they wake up, they have different thoughts. The example
that we do now is smoking. That's not a bad
thing, because we're still a science lab. So we bring
(29:04):
a smoker to the lab. We have him go to sleep,
we wait for him to get to the right moment where
his brain thinks about smoking or about the past,
and we spray the smell of nicotine into their
nose to make them think about smoking. Then we spray
the smell of rotten eggs, which is a smell
that's known to penetrate the brain and make you have bad thoughts,
but not to wake you up. The pairing of those two
creates a connection in your brain between smoking and something
(29:26):
bad, such that when you wake up, you don't want
to smoke anymore for a few days, and you have
no idea why. Like, you just... I don't feel like smoking.
So this was me changing your brain behind your back
in the course of a few hours to something that
we think is positive. But as evidence, it suggests
that we can do anything. We can change your brain
to anything else. So there have been studies that looked at
trying to make you eat healthy. So you go to sleep.
(29:48):
They kind of inject an idea into your brain about
what you should eat when you wake up. When you
wake up, they say, here's a buffet, choose what you want,
and people choose healthier options, because they injected the ideas.
There's one group that looks at racism. They take people,
they test how racist they are before they go to sleep,
then they do something to their brain, and when they...
How do they test how racist they are? How do you get,
like, a casual test of how racist you are
before you... By the way, the thing that's so funny
(30:10):
about you is usually, oh, and we just test how
racist you are before you go to sleep. Like, what
do you mean? So this is a test that
was invented, I think at Harvard, a few years
ago, called the IAT, the Implicit Association Test,
where you basically show people things like pictures of African
Americans and Caucasians and you ask them to put the
(30:30):
African Americans in the bucket on the left by pressing
the left key and put the Caucasians in the bucket
on the right by pressing the right key, as fast
as you can, without making mistakes.
A picture appears and you have to quickly put
it left or right. And then they show you words.
Some of the words are positive or negative: love, beauty,
perfection, versus bad words like anger, disgust, and so on. And
again you have to put them in the bucket, left
or right. And then they say, I'm gonna show you
(30:52):
either the face of, say, a Caucasian person or a nice word; put
them in the same basket. Or we're gonna show you
an African American face or a nasty word; put it
in the right basket. And again, do it as fast as
you can without making mistakes. People still do it
very, very fast. And then you say, now we
reverse it. We're gonna show you either an African American face
or a good word, and put those in the right basket,
(31:13):
or a Caucasian face and a bad word, and put those in the
left basket. And suddenly people start making mistakes, because they
have an implicit association that a Black person is bad, a white
person is good. So it's harder for them to do
the reverse association. They do it either slower or make mistakes.
And that's showing that you have some kind of implicit
bias against African Americans, even if you don't exercise it,
you don't really act on it. And it's true even for
(31:34):
African Americans. Even they fail the same test. And
they do it with women and wages. It shows that
people have a bias towards paying women less than men.
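The reaction-time logic of the test he's describing can be sketched in a few lines of Python. This is a simplified illustration, not the validated IAT scoring algorithm, and the sample reaction times are invented:

```python
# Simplified sketch of the Implicit Association Test (IAT) idea:
# people sort stimuli faster when the paired categories match their
# implicit associations ("congruent") than when the pairing is
# reversed ("incongruent"). The reaction times below are invented.
from statistics import mean, stdev

congruent_rts = [0.61, 0.58, 0.65, 0.60, 0.63, 0.59]    # seconds per trial
incongruent_rts = [0.82, 0.91, 0.78, 0.88, 0.85, 0.80]  # reversed pairing

def iat_effect(congruent, incongruent):
    """Mean slowdown in the reversed block, scaled by pooled variability.
    Loosely modeled on the IAT's D-score; the real algorithm also
    handles error trials and outliers."""
    pooled_sd = stdev(congruent + incongruent)
    return (mean(incongruent) - mean(congruent)) / pooled_sd

score = iat_effect(congruent_rts, incongruent_rts)
print(f"IAT-style effect: {score:.2f}")  # positive => slower when reversed
```

A score near zero would mean the two pairings were equally easy; the larger the positive score, the stronger the implied association.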
They could do that at, like, tech companies, you know,
when they need to hire more women or
more diversity in companies. All of those studies show
that most of us have these biases. We're kind of
trained by society to have those. And whether you're a
(31:55):
woman or a man, young, old, Black, white, all of
those, you still fail the same tests. Until now, this
was the case and there was no way to change it.
What they do right now is they take the test first,
they show that you have the bias, then you go to
sleep. They basically use olfactory cues to essentially
remind you that you should break the bias at the right moment, and
(32:17):
when you wake up, they take the same test. And
what we show is that people actually change their biases.
They become a little bit less likely to make a
mistake when they see an African American face and a
good word, and vice versa. So all of those are
in the realm of the hard things, because they involve
a person in a lab, putting stuff on their head, going to sleep,
getting the right moment. Someone has to look at your
(32:38):
brain in sleep to know that you're in the right
moment and do stuff. It's complicated. It's still very clear
what you need to do, but it's a cumbersome process.
There's a much easier one that is done in a
lot of labs right now. We essentially convince a person
that a memory is not what it was, just in
words. We have a study we're doing right now where we
bring you to the lab and we ask you to
make a simple choice. We show you two pictures of
(33:00):
two guys that you don't know and we say, tell
me, who do you find more attractive, the guy on
the left or the guy on the right? And we
have cards in our hands with those pictures, so you
might point to the guy on the left. And it's basically, well,
it's a version of Tinder or Hinge. You see a
picture and you have to swipe left or right. Only here
you see two guys and you have to choose which one
is better. So you have to choose one. So we
show you two guys and we ask you, who do
(33:22):
you find more attractive? And you say, the guy on
the right. And then we give you the card that
you selected, and we ask you to hold it in
your hand and explain to us why you picked this
person. So you might hold it in your hand
and say, I really like his smile. We say, fantastic,
let's pick another pair. So we pick two new cards,
two different guys. You do that a hundred times, so
every couple of seconds you get a new pair and
you make a choice and you explain it. And here
(33:43):
is the trick. Every now and then, let's say every
twenty trials, we give you the card you didn't choose.
So the person who gives you the cards is a magician
and uses sleight of hand to give you the
other option. So if you chose A, he would give
you B. And what we see is, at first, people
rarely notice that they didn't receive what they chose. That's
kind of step one in changing memory. And step
(34:05):
two, when we ask them to explain, they come up with
an answer. So they chose A, I give them B,
they take B, and they say, I chose B because
I really like his... They just come up with it even
though they didn't choose that person. And in that moment,
what happens in their brain is that their brain now
creates an association and a memory with something they didn't choose themselves.
So if you ask them more and more about why
they chose option B, their brain is going to commit
(34:27):
more and more to this choice, so much so that
if they come back tomorrow and you give them
the same options again and you say, choose again, they're
going to choose B now. So think about it, the
customer version. You go to the supermarket to choose a toothpaste.
You debate between the Colgate on the left and
the Crest on the right, and you run a complex
analysis of the two, and in the end, you choose
Colgate, you put it in the basket, and you go
shop for other things. Somewhere between the moment you
(34:49):
chose the Colgate and the checkout, I was tinkering with
your basket and I replaced the Colgate with Crest.
Most likely you're gonna buy the Crest. You won't know it.
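The lab version of this trick, handing back the unchosen card on a fixed fraction of trials, can be sketched as follows. The trial counts and face labels are illustrative, not taken from the actual study:

```python
import random

# Sketch of the choice-blindness protocol: on most trials the
# participant receives the card they chose, but every 20th trial
# the "magician" hands back the other card instead.
def run_trials(n_trials=100, swap_every=20, seed=0):
    rng = random.Random(seed)
    log = []
    for trial in range(1, n_trials + 1):
        pair = (f"face_{2 * trial - 1}", f"face_{2 * trial}")
        chosen = rng.choice(pair)                    # participant's pick
        other = pair[1] if chosen == pair[0] else pair[0]
        # Sleight of hand: hand over the non-chosen card on swap trials.
        handed = other if trial % swap_every == 0 else chosen
        log.append({"trial": trial, "chosen": chosen,
                    "handed": handed, "swapped": handed != chosen})
    return log

log = run_trials()
print([t["trial"] for t in log if t["swapped"]])  # -> [20, 40, 60, 80, 100]
```

The interesting data is not in the code, of course; it's in how rarely participants notice the swap and how readily they justify the card they were handed.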
And if I stop you on the way outside and I
say, I'm from Procter & Gamble and I'm running a
marketing campaign, I want to know
why you chose Crest, you won't say, I have no idea.
You won't say, I chose Colgate and someone flipped them.
You will defend the choice you didn't make, and in
(35:11):
doing so, you convince yourself you wanted it, so
much so that tomorrow you actually buy the Crest. So
I can now make you change your memories in a
very, very simple way, and by asking you why, make
your brain compute a story that you will now believe and
go forward with. We've got to take a quick break
to hear from our sponsors. But when we come back,
try this one out. Imagine a future where companies could
(35:33):
control the content of your dreams. More than that, after
the break. So how could that be weaponized in the
(35:58):
future for political uses? Because we're all talking about
disinformation online and the fact that people don't really
trust what they see anymore, and there's, you know, nation
states manipulating social networks. But you're talking about something
a little bit different. So the tagline for this research
in our lab is, don't believe everything you think. And
the point behind that is that, evolutionarily, we were raised
(36:22):
to have a brain that manufactures reality for us. So
everything our brain comes up with is reality. We
never doubt our own mind. So if you have a
thought in your brain, it's real. You might spend a
lot of time vetting thoughts that come into your brain
by being skeptical, by asking questions, by really exerting your
might to make sure that nothing comes in that's not true.
But once it's in, you trust it. If tomorrow
(36:43):
someone asks you, what did you do yesterday, and your
memory tells you that you were here with me in
the studio recording this podcast, and this person says, no, no,
yesterday you were playing golf with me. You say, no,
I have a memory that says that I was with
Moran in this recording studio. And they say, no, you
were on a golf course playing golf with me, and
they start showing you pictures of the two of you playing
golf, or bring ten friends who would vouch for the
(37:05):
fact that you were with them. Nothing will change your mind.
It doesn't matter how many people would claim that you were
not where you were. You have a memory of reality,
and you think this is reality, and you would not change it.
And this is how we evolved, because we had to
trust our own brain. Now that someone can actually hack
into our brain when we're sleeping, using experiments like the ones
I told you about, where we flip options and make
people come up with answers for the choices, and in
(37:26):
doing so change their minds, you wouldn't be able
to trust your own mind anymore. So suddenly you really
would have to doubt your own thinking and not know
how you behave. And the world we live in right now
relies on you having full understanding of your own brain
for everything. But now this is no longer a fair game.
Your brain is vulnerable. I think that the best answer
(37:47):
I could give people, when they ask me what they
should do about it, is to think of themselves on
April Fools'. So April Fools' is the one day of the
year people are actually skeptical of everything. So if I
tell you on April Fools', you know, your mom called me,
you would say, wait. Normally I would trust him, but
today is April Fools'. Maybe he's lying. I won't trust
even this piece of information. And that's the
only time we are actually skeptical of everything. You
(38:09):
would have to play April Fools' every day with your own thoughts.
So right now, is every day like April Fools'? So
I think that the experiment that we did with political
polls is just an example of how easy it is
to flip a mindset. A person comes down the street and
is told, we're gonna do a political poll. We're gonna
ask you a few questions and we just want to
kind of figure out who you are. So we ask
them ten questions and we sit with the paper and
kind of mark the answers. And question one says something
(38:31):
like, on a scale of one to ten, how for
or against abortion are you? And the person says, I'm
a nine, pro-choice. So that's their answer. Then we ask
them about climate change: how much do you believe climate
change is really man-made? They say seven, and we
mark what they said. And we ask them many, many questions
and mark the answers. But the reality is that
we don't mark what they said. We mark different things.
(38:53):
So if they said nine, pro-choice, we put
a six. If they said seven to climate change is
real and man-made, we put a five. So we kind
of change their answers a little bit, not totally to the
other extreme, but just a little bit. And then we say, okay,
let's tally all of your answers and see where
you come out, kind of, politically. And we tell
them all the votes, and next to them, we kind
of score it. And what comes out is
actually right-leaning. They thought of themselves until this
(39:16):
point as, like, a very kind of liberal Democrat, but
somehow the answers come out as if they're a right-leaning
kind of Republican. And then we ask, can you explain
to me what this is? And sometimes, not always, depending
on how many questions we ask and how
good we are at doing this thing, they're gonna
start coming up with an answer that says, actually, now
that you say it, yeah, I was raised a liberal,
(39:36):
but when I see the world right now, the world
becomes more polarized, and I think that we should be
a little bit more proactive. And they start coming up
with answers, and the more you ask them questions, the
more they create associations in their mind that align with
this story that's not true. And now they go into
the world, into the wild, we call it, with
(39:56):
different thoughts of themselves, a different story, and they will
totally go with this story. And they talk to their
friends and convince them, and they go into an echo
chamber. Now there's a virus in the echo chamber,
which is a person that comes into an echo chamber
that was safe, with a new thought, and will start infecting others.
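The arithmetic of that poll trick (recording each answer a few points toward the opposite pole rather than flipping it outright) can be sketched like this; the question names and the shift size are illustrative:

```python
# Sketch of the poll manipulation: answers on a 1-10 scale are
# recorded shifted toward the opposite end, a nudge rather than a flip.
# Question keys and the shift size are invented for illustration.
def nudge(answer, shift=3, low=1, high=10):
    """Move an answer a few points toward the opposite pole of the scale."""
    midpoint = (low + high) / 2
    shifted = answer - shift if answer > midpoint else answer + shift
    return max(low, min(high, shifted))  # keep it on the scale

said = {"abortion_pro_choice": 9, "climate_man_made": 7}
recorded = {question: nudge(answer) for question, answer in said.items()}
print(recorded)  # -> {'abortion_pro_choice': 6, 'climate_man_made': 4}
```

The point of the small shift is plausibility: the tallied result still feels close enough to what was said that the person explains it away rather than disputing it.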
So why are we doing that? We did it because we
need to prove it works, because then I can tell
your audience about it. Then they become a lot more aware.
So once you know it, it already doesn't work as much.
(40:18):
So this is all on a very human level.
And now, when we look at the technological level:
one day, when there's actual technology inside of us,
it could be hackable, right? Um, you're someone who has
a hacker background and now you're a neuroscientist. So in the future,
we could have these chips inside of our brains. Do
you think they could be hacked in any capacity? This
(40:38):
just makes it much more efficient and at a large scale.
So up to now, we had to take a person,
stop him or her on the street, ask questions, do
a ten-minute process with them. Once a chip is inside
of your brain, it's just a button press. It becomes
a lot easier and at scale. So instead of changing
the minds of one at a time, we can change
the minds of everyone in Michigan and make them vote
one way or the other. That's the scary story, and I
(41:00):
think the reason we should talk about it right now
is that the proof of concept that comes from labs just
shows how it works. But that's where we stop. We
write a paper that says this is possible: you can
hack into a brain and change a person's mind. That's it.
When we talk about disinformation, that's what I think
is coming down the line that we're not talking about yet.
So I think the biggest fear is that. So I
(41:21):
think that suddenly, you know, creating a misinformation ad on
Facebook looks like peanuts when you talk about, I can
just change your mind as you go to the voting
ballot and just make you vote how I want. You
become a puppet, and the puppeteer is the biggest fear.
It's scary. And I think that people speak right now
mostly about the marketing version, the thing that is
still, I convince you by showing you the wrong ad
(41:41):
and so on. That's the old-fashioned thing. This is
what we've done since, you know, forever. The fear is
that it's going to be a lot bigger and
it's going to be aimed directly at your brain and you're
gonna welcome it. So I think, the way I see
it, is that the Silicon Valley guys, they're going to
see the positives of a chip inside our brain, the
high-fidelity access to Wikipedia and the solving of a cure
(42:02):
for cancer, everything that's great, and that's why they're gonna
put it in their brain. And they're gonna create the
entrance point for the villain who can now hack into
their brain, and when they ask Wikipedia a question,
when was the French Revolution, what they're gonna get is
also, vote this way. And that's, I think, kind of
where it's going to go. And the thing is, it's
(42:23):
a race to the bottom. So if someone in Silicon Valley
does that and is able to, say, think
better and invest better than all his friends, then the
friends would have to do it as well, and suddenly
there's inequality. So all of the people, not
just in Silicon Valley but in Texas, would want
the same chip in their brain, and suddenly Alabama wants
the same thing, and before long China wants the same thing.
(42:44):
And we're gonna bring it on ourselves, because we can't afford
to not have it, and this makes us all vulnerable.
Mm hmm, that's interesting. That sounded really... Yeah, that
got really dark pretty quickly. I feel pretty depressed.
Is there, like, any silver lining there? So I think that, yes,
there is a little, for your audience. The good news
is that we decide, so it doesn't have to happen.
(43:04):
So, historically, humans have not been great at taking technologies
and not using them for bad things. Every time we
have technology that could be used in a negative way,
it was used in a negative way. But we have also
shown, increasingly in the last fifty years, that we can
also harness technologies. We have weapons that could kill a
lot of people, and we decided together that we're not
going to use them. And this happened multiple times, and
it happened across the world, by many countries. So in
(43:27):
the same way, we can decide how we regulate those
implants in the brain such that the things that are good
happen without the bad sides. What about... Facebook just
bought CTRL-labs, which is, like, you know, this mind technology.
I'm kind of obsessed with this idea. Is, like,
Facebook down the line, or these companies, selling
ad space in our brains in the future?
Do you think that's sci-fi? I think that's the
(43:50):
sci-fi of the future. The CEO of Netflix, a
few years ago, I was at a conference and he
said it on stage. He said that the biggest competitor
for Netflix isn't Hulu or YouTube or any of those.
It's sleep. So people sleep for hours and they
don't watch movies. Their brain is active and no one
uses it. Right now, we have studies in our lab
where we try to manipulate dreams. We try to basically
(44:10):
create movies in your mind that will come when you
sleep and will be content for you. This means that
suddenly Netflix and Hulu and YouTube and all of those
are going to have one more canvas for content that
they're gonna start using. They're gonna have dreams by Spielberg,
or dreams by, you know... So that's kind
of the domain we're playing in right now. And I
(44:32):
think that, to your question, this really means that the
competition is going to be welcomed. People are gonna want that,
and they're gonna bring in the bad sides with it. Now,
there's an episode of The Simpsons that I watched long
ago that is my kind of go-to when I
talk about greed. Homer Simpson sits with Mr. Burns and
(44:52):
they suddenly become friends for a minute. Mr. Burns, if
the audience doesn't know, is the rich owner of the
nuclear plant, kind of the depiction of the rich
guy. And Homer Simpson tells him, Mr. Burns, you're the
richest guy I know. And Mr. Burns says, but I
would give it all up for a little more. That's
kind of how humans are. Every time there's a chance
for a little more, we want that. And we have
(45:14):
now a chance for a little more IQ, a
little more control of our own mind, a little more thinking,
a little more access to our own brains. No one's
gonna want to say no to that. We have access
to more sleep, we have access to control of our behavior.
Companies can control our dreams and give us the best content.
We would welcome that, but with it, I mean, we're
opening the floodgates for also bad things that we
(45:34):
don't think about right now. I mean, that's crazy, the
idea that you could order dreams. You really think that's happening?
That could happen? Like, we could have dreams sponsored
by Spielberg? I mean, that's not the worst. I loved,
like, you know, E.T. It's my favorite movie. I mean,
so I think that it was science
fiction a few years ago, but that would be such
a great movie, if I can watch E.T. in my
(45:56):
sleep, because I just don't have time in the day.
The big question that you have here is, like,
is it possible or not? Right now, the science is
at the level of, we can induce you to have a
good dream, not knowing what it's about; we just spray
the right smell and your brain goes to a positive dream.
That's it. We can make you have a bad dream:
we inject the wrong smell, you have bad dreams. We
(46:16):
can sometimes control a little bit of the content. So
if I do anything with water, if I spray water
on you, or if I dip your hands in water
while you're dreaming, you will incorporate water into the dream. You
will see waterfalls or the ocean or a boat. So
we can kind of induce very basic ideas, which means
that we can control your dream at a very, very
crude level. But this, historically, is the gap between a
(46:38):
proof of concept and engineering. So now, dreams are
no longer something that is totally a black box
for us. We know how to change them, and now
it becomes an engineering problem, like finding the right smell
for every particular concept, realizing what makes you dream of
your mom versus your dad, stuff like that. It becomes
a race by engineers, led by neuroscientists. Neuroscientists have
proven that it works, and now we'll leave it to
(46:59):
others to kind of perfect it, which means that you
can soon get to a point where you, at the very
least, can choose what memory you want to reactivate in
your dreams. So you go on a date, you come
back home, you go to sleep, and the date is over,
even if you're next to the person you were with.
The sleep kind of separates you. So we can now,
at least as a very minimal thing, make the dream
(47:21):
go longer by finding cues from the awake state
that will let you go into the sunset together in
your sleep. But in the future, we're ordering up dreams?
So I would... I am gambling on that, because companies
come to me every now and then and say, we wanna
think about that, and I help all of those companies
do that. Isn't that the craziest thing? Like, a company...
I mean, you hear... It's like, what's the craziest thing
(47:43):
companies have asked you to do? So I think that
dream manipulation is sitting somewhere up there. Those companies are asking
you to... Companies, famous companies, like the Silicon Valley companies
that you know. One of them came to me a
number of years ago, when I gave a TED talk
about dream manipulation, and one of them was sitting in
the audience and said, like, we want to incorporate it
(48:04):
in the next version of our big product. And I said,
it's very, very unreal right now, it's just, like, a
proof of concept, and our lab really is on a
mission of just showing that something is possible, not of
making a product. And at the time they were ready
to do anything, like, they were ready to basically buy
our lab and move us to California so we could
develop it. And at the time, I mostly didn't believe
(48:25):
it was possible to do it as fast as they thought,
so I said, not in my lifetime. But since then,
a lot of companies came after that, so we're no longer
talking about science fiction. The big companies want to play with
doing things to you when you sleep. I mean, don't
you feel like now, like, Silicon Valley is gonna be a
little careful about that? Because, like, now we figured
out they did a lot of things to us that we
didn't even realize, with, like, addiction and mental health. And, like,
(48:48):
I don't know if I would trust, like,
Facebook or Twitter to do things to me in my
sleep anymore. You know, that's an interesting kind of psychology. On
the one hand, I think that things have not changed dramatically.
Like, you know, we talked at the beginning about cheating.
One of the things about cheaters is that when
they get caught, at the moment they get caught, they
immediately promise to themselves and to everyone else they'll never do it again.
(49:10):
But if they get forgiven, they actually go back to
their bad behavior, because now they have evidence that it works.
Like, they got caught and nothing happened, and they
do a better job of hiding it: the second time,
they actually know in what ways they failed the first time,
so they do a better job of hiding it. So
in that sense, I don't think that
things have changed dramatically in Silicon Valley. I also want to
(49:31):
defend our friends there, because we know many of them,
and we know that... no, no, I think no one
is bad there. It's not malicious or an act of
trying to do wrong. It's somehow the system and the
structure of the world that suggests that good things are ahead,
and they're engineers who want to perfect them. So in
that sense, I think that to just blame it on
(49:52):
them is a little bit unfair. I think that's true.
I think it's unfair to just... It's not black
and white. What do you think is the most important
ethical question we should be asking right now? So, down-to-earth
questions: I think that technology and, you know,
brain implants and dream control and
changing memories, they're in our life, but I think that
(50:13):
they're far enough from the next electoral cycles that we
don't have to worry about them. I think the biggest
effect of technology, truly, outside of what we spoke about right now,
is how it affects relationships. I think that there are
countries already where, you know, people have no sex, they
have fewer relationships with humans, they spend a lot more
time on their devices instead of in relationships. So right now everyone
(50:33):
speaks about screen time as just taking the place of social time.
But I think, specifically with romantic relationships, there are just
fewer and fewer people who find meaningful relationships with others. And
I think this is the biggest kind of risk to
our world. As a neuroscientist, I say, our brain loves interaction,
it loves communication, and if people are not doing that enough,
(50:54):
they're actually hurting their brain. There really is a kind
of negative consequence, brain-wise, to not having a
person to talk to, to interact with, to rely on,
to have comfort with. And that's, I think, the biggest
thing. If I were to choose one solution to
advocate for, I would say, find a way to have
partners in your life that are meaningful. I mean, it certainly
(51:15):
seems like a lot of people are replacing their partner
with, kind of, like, the phone. Right. You just
did a study with Hinge. We were talking about it before
we started. Tell me about the study. Like, what is
the most interesting thing that you got from this study?
The most practical thing, which is never gonna work, is that
it turns out that you would probably do better at
(51:37):
finding a partner if you outsource the search for a partner
to someone else. Someone else could be a friend or
an AI. So we are our own worst enemies when it
comes to making choices. We rely on the wrong cues, we're
too fast in judging stuff negatively; when we judge positively,
we immediately come up with the answers why; and we're very critical
of ourselves. If you wanted to kind of get advice
(51:58):
for dating online, I think I would say, give your
phone to your best friend and ask her or
him to swipe for you. What we say in the
paper is that, in a way, it could also be
an AI, someone who learns a lot about your preferences
and actually starts swiping for you and just says, look,
I found this person who's great for you. Don't
look at him before, don't read anything about him, go
on the date with him. But the big one that
(52:19):
I want to leave the audience with is, like, start
trusting others, even in choices that you think are very, very personal.
I'm kind of obsessed with this idea that in the
future we could have AI bots date for us.
Is that, like, totally sci-fi? Or could that come
down the pipeline? So no one does it right now,
and I think they should. And I think that the
interesting thing that drives that is that people don't
(52:40):
really know themselves as well as they think. They have answers:
if you ask them, like we said before, if you
ask people, why did you choose this or that, they're
gonna come up with an answer; it's just that it's not
true. For example, we know that a big driver
of your preferences is smell. There's a study where people
come to the lab and someone shakes their hand
and has them sit down before the study begins,
but the study already began, because the handshake is what
(53:02):
they were looking at. And what they show is that
within a few seconds from the moment someone shook your hand,
you're gonna bring your hand close to your nose and
smell it. No one notices, it's unconscious, but people do that.
And they did it in a very controlled way. They
had a heterosexual male shake the hand of a heterosexual
woman, and then they smell their hand. But if it's
a man, versus a man, they're not doing it. And
when the person who shakes your hand
(53:22):
wears, like, gloves, no one smells it. They really did
control it. And the bottom line is
that smells are really critical to
how we evaluate other people. We actually, you know, assess
the mix of our chemistry together in a very practical way.
No one's ever gonna tell you, I really liked her
because our mixture of smells, after she didn't shower for two days and
(53:43):
I didn't shower for three days, is a perfect alignment of bacteria,
and my body loves her bacteria, and we're gonna have
great babies together. People don't say that. They say, she
was really funny and interesting and we share the same
love for the sports team. And this is something that
is driving our behavior that we don't know anything about.
Machines could do it for us, and they're gonna actually
know her biome, your biome, her little quirks that she
(54:04):
doesn't tell anyone, and yours, and could match you in
a way that really aligns with your interests. And if
we go back to the beginning and say that
relationships are one of the most important things that
we're starting to lose with technology, this will change
a lot of how we see the world: it would actually
make people remove biases, make people get new
ideas in their mind, and protect our stories better,
because as you share stories with other brains, you actually
(54:25):
have a better chance of having accurate memories rather than ones
you lose. Wow. But, like, you're sitting there inside people's
brains essentially, I mean, and you're you're talking about dreams
and whatnot. You could change people's minds already and ways
that are I mean, let's just be honest. You can
make some one more racist and their sleep right. What
(54:47):
our lab does is proof of concept, and we always
do it positively. But when I give talks, I always
tell people you can easily see how the same thing
can be used to make a person wake up and
eat more unhealthy food rather than healthy food, or become immoral. The science doesn't tell you if it's good or bad. The science doesn't know; the science is just kind of an objective tool. And my students, the MBA students I
(55:11):
teach at the business school, because they're young and they're kind of millennials, often ask questions about ethics. They say, wait, but this could be used for obviously terrible things.
And I say, I'm glad you asked that, because you're
right it can. And since you ask that, you have
the more obligation to remember your question. Twenty years from
now and you're gonna be the CEO of one of
(55:31):
those companies and you're gonna have this quarterly decision whether you're gonna use those techniques to make someone buy more of your Cap'n Crunch, or stop it because you know that it's not what they wanted. Unfortunately, so far, the world doesn't look like it's embracing those ethical ideas as much as we'd want. But I think that the new generation
(55:53):
is doing a good job of putting it on the table again and again, like we do in this podcast, making people think and maybe change their behavior. Spend enough time with Moran and you begin to question everything.
But maybe that's the point of this, the next iteration of this messy phase of technology, where truth has
(56:15):
taken on its own meaning. The point is not just to question the tech companies, the lines of code we see in front of us. The next phase is questioning ourselves and our own thoughts. Understanding how malleable our own
minds are is the first step in changing behavior, which
is important because it's the first line of defense against
what's coming down the pipeline in an era where the
(56:36):
lines between true and false and real and fake have blurred.
I'm Laurie Segall and this is First Contact. For more about the guests you hear on First Contact, sign up for our newsletter. Go to First Contact podcast dot com to subscribe. Follow me, I'm at Laurie Segall on Twitter and Instagram, and the show is at First Contact Podcast.
If you like the show, I want to hear from you.
(56:57):
Leave us a review on the Apple podcast app or
wherever you listen, and don't forget to subscribe so you
don't miss an episode. First Contact is a production of
Dot Dot Dot Media. Executive produced by Laurie Segall and Derek Dodge. Original theme music by Zander Singh. Visit us at First Contact podcast dot com. First Contact with Laurie Segall is a production of Dot Dot Dot Media and
(57:18):
I Heart Radio