Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:15):
Welcome to TechStuff.
Speaker 2 (00:16):
I'm Oz Woloshyn, and today I want to share a
fantastic interview that Karah recorded with the documentary director Adam
Bhala Lough about his latest project, Deepfaking Sam Altman.
The film follows Adam as he attempts to score an
interview with the CEO of OpenAI, but when that
starts to feel kind of impossible, Adam decides he wants
(00:39):
the next best thing: an interview with Sam Altman's deepfake.
So he makes one. It's a fascinating watch that takes
you from the gates of OpenAI all the way
to India and back again, when Adam sends his Sam
Altman deepfake directly to Sam Altman.
Speaker 1 (00:58):
The real one.
Speaker 2 (01:00):
Enjoy this conversation between Karah Preiss and Adam Bhala Lough.
Speaker 3 (01:06):
So you set out to make this documentary about AI.
How did you end up focusing on Sam Altman, and
why are you drawn to him particularly as a subject
of a movie about AI?
Speaker 4 (01:18):
Yeah, Sam Altman was really an entry point early on,
and in a lot of ways, like, I believe that
he's the guy that's ferrying us into this future through
OpenAI, and so he, of all people, would be
able to answer the hard questions that I wanted to
(01:38):
ask him.
Speaker 3 (01:39):
When you say ferrying us into this future, what's the
future that you're talking about?
Speaker 4 (01:44):
It's a future where we're completely enmeshed with artificial intelligence,
where it's as much a part of our daily lives
as the Internet is.
Speaker 3 (01:52):
Do you not think that we're already there? Is it
really the future?
Speaker 1 (01:55):
No?
Speaker 4 (01:55):
I don't think we're completely there yet. I mean, I
think that certainly people on, you know, the coasts and
more privileged people are, but the majority of
Americans aren't, frankly. And we also see that it
took time for a lot of people in
America to adopt the Internet too, because, you know,
(02:17):
it just wasn't accessible to them. I mean, I
even remember, as recently as ten years ago, people
still having to go to the library, relatives of
mine, and especially, you know, my family in India, still
having to go to the library and stuff like that
to access the Internet.
Speaker 3 (02:31):
So you tried to actually interview Sam Altman and talked
to a lot of people who were like, yeah, fucking right,
and basically were like, you know, even if you do
talk to him, he's going to be an expert who's
telling you what you want to hear. You know, before
making this movie, was there a sort of myth that
you had about him in your mind that you were
(02:52):
trying to kind of at least contend with or debunk?
Speaker 4 (02:56):
Yeah. The initial question, the one that I even
kind of pitched to the producers and the financiers, was:
is Sam Altman a hero or a villain? And then
there was the New York Magazine article by Liz Weil
that I optioned, Sam Altman Is the Oppenheimer of Our Age.
So that was another question: is he the Oppenheimer?
(03:17):
Like, I didn't necessarily agree with Liz right off the bat.
I wanted to know more, and I know that her
article was quite controversial. But yeah, that was
another question that I wanted to, you know,
answer through making the doc, through interviewing him,
meeting him, interviewing other people at OpenAI, et cetera.
Speaker 3 (03:38):
So there's a sequence where you're trying to get into
the OpenAI building and no one will speak with you.
But beyond that, no one would even confirm that
the building you were outside of was the OpenAI building.
What was going on there? And then how did
you finally gain access to the company? Can you
explain the chase a little bit?
Speaker 4 (03:58):
Yeah, what was going on there? I still have no idea.
I actually read an article another reporter tried to do
what I did and had the same reaction, like no
one would talk to her and no one would even
confirm that, oh, this is the building where open AI is.
Like I feel like if I rolled up on the CIA,
like a CIA agent would be like, yeah, yeah, that's
a CIA, You're not going to get in, but like, yeah,
(04:20):
that's what it is there. It was so weird.
It just felt like a cult, you know what I mean?
It is, absolutely, it is a frightening cult. And I
talked to four whistleblowers from it, and they talked
like cult members that had gotten out of a cult.
(04:41):
Yeah, they were. And you see two of them in
the film, and then two others talked
to me off the record, but they seemed traumatized
by their time at OpenAI. So yeah, it feels
like a cult. And to answer your question, I got
access, and you can see it in the film, finally,
when somebody opened the gate and I ran in before
it shut, and then the security guard immediately grabbed me
(05:04):
and physically removed me from the premises.
Speaker 3 (05:07):
You know, you kind of then decide, or understand, that
he's not going to talk to you anymore. And so this
is where the movie starts, right? You start to think
outside the box. Can you talk a little bit about
what you did?
Speaker 4 (05:20):
Yeah, I mean, around the time that I came to
this realization, and I think it was like a hundred-something
days since I had first contacted him, around that
time was the infamous Scarlett Johansson controversy with ChatGPT.
But when I heard about that, that he had stolen
(05:41):
her voice, I was like, I can do this to him.
He's just given me, like, license to do this
to him. But let me take it one step further.
The voice cloning I was already doing, you can
see in the movie, I was already doing some voice
cloning and experimenting with it before that. But after that I
(06:02):
got this idea that not only could I do
a voice clone, but I could do an interview with
an actor playing Sam Altman and create an LLM, a
sort of brain of Sam Altman to chat with, and
the actor would read the lines off a teleprompter, so
that I would do the interview that way, and then
in post-production I would just deepfake everything, like
(06:23):
deepfake Sam Altman's face onto the actor. So that
was the initial idea. It would be a
multi-step process where first I'd create this LLM, which was,
you know, relatively easy to do, and then, well, just...
Speaker 3 (06:40):
Really quickly: relatively easy? You have to basically create a
large language model that is realistic enough to represent a
Sam Altman that you're interviewing. So how do you go
about doing that, just for people who are wondering?
Speaker 4 (06:55):
Yeah. So I started out trying to find somebody in
the States to do it, and on film I went
to a couple of companies in the Bay Area and
talked to them about it, and they said no. And
then, just over email, our producers reached out to probably
twelve or so other companies, who all said no. So
I had to go to India to get it done.
(07:17):
And I found this guy on YouTube who'd made this
video about Barbie. He'd made, like, a Barbie deepfake
where he deepfaked Bollywood stars' faces onto the actors
in Barbie. I reached out to him, and he
was very good.
Speaker 3 (07:31):
He was very good.
Speaker 4 (07:33):
Yeah. I reached out to him and I said, hey,
can you do this for me? And he was like, yeah,
here's how we do it. And we went to
India and I watched him do it. Long story short,
you know, he coded this LLM and he loaded it
in with everything that Sam Altman has ever said or
written on the Internet. And obviously this works, like,
(07:54):
way better with somebody who's a celebrity or public person, because...
Speaker 3 (08:00):
It's the same thing with, like, voice replication, right? It's
like, the more you have, the better the thing.
Speaker 4 (08:05):
Yeah. So if you have very little information about you
on the internet, then unless you do,
like, kind of an interview with the person creating it,
and you write all this stuff and answer all these questions,
that chatbot's not going to be great. But if there's,
like, tons of stuff out there... And also, Altman, when
he was younger, and I think still to this day,
he blogged a lot, so he had a
lot of information that was first person, that he would write.
(08:27):
So this thing could get his voice and style
of writing and style of speaking, also from videos
and everything, so it was pretty damn good. You'll find
this interesting. We started out, Dev, the Indian deepfaker,
Dev Singh, who made the chatbot with me in India,
we initially had it based on ChatGPT, but we found
that ChatGPT was too PG, so we switched it
(08:50):
over to Grok. And when we switched it over to Grok,
it got way more of a personality, and it could get nasty. Yeah,
it could curse, it could say crazy things,
but in general it just had more of
a personality, which kind of makes sense, because
Sam Altman has no personality and Elon Musk definitely
has a personality, whether you like it or not. Like,
(09:12):
he's a fascinating individual. Sam Altman is very boring and dry,
so it's kind of like the chatbots are a mirror
of their creators. But when we switched it over to Grok,
it became really interesting. The conversations I had with it
were pretty fucking mind-blowing, actually, and I enjoyed, you know,
my time talking to Sambot. I never had a bad
(09:33):
conversation with him. I like to say him. I
should be saying it, but I say him, also because
it is Sam Altman, right? It is
based on a man.
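The persona-chatbot approach Adam describes, grounding a model's answers in a corpus of everything the subject has said or written publicly, can be sketched minimally. Everything below is hypothetical: the quotes are invented stand-ins, and a real system like Sambot would pass the retrieved passages to an actual LLM rather than just assembling a prompt.

```python
import math
import re
from collections import Counter

# Hypothetical mini-corpus standing in for "everything the subject
# has ever said or written on the Internet".
CORPUS = [
    "I believe AGI will be the most important technology humanity ever builds.",
    "Startups succeed when founders are relentlessly resourceful.",
    "We have to deploy AI iteratively so society can adapt.",
]

def tokenize(text):
    # Lowercase word tokens; apostrophes kept so contractions survive.
    return re.findall(r"[a-z']+", text.lower())

def tfidf_vectors(docs):
    # Document frequency per term, then a TF-IDF weight per document.
    token_lists = [tokenize(d) for d in docs]
    df = Counter()
    for toks in token_lists:
        df.update(set(toks))
    n = len(docs)
    vecs = []
    for toks in token_lists:
        tf = Counter(toks)
        vecs.append({t: tf[t] * math.log((1 + n) / (1 + df[t])) for t in tf})
    return vecs

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_prompt(question, k=2):
    # Score every corpus quote against the question and keep the top k
    # as grounding context for the persona answer.
    vecs = tfidf_vectors(CORPUS + [question])
    qvec = vecs[-1]
    scored = sorted(zip(CORPUS, vecs[:-1]),
                    key=lambda cv: cosine(qvec, cv[1]), reverse=True)
    context = "\n".join(quote for quote, _ in scored[:k])
    return (f"Answer in the subject's voice, grounded in these quotes:\n"
            f"{context}\n\nQ: {question}\nA:")

print(build_prompt("How should society adapt to AGI?"))
```

The point of the sketch is the scaling behavior Adam mentions: the more first-person text a public figure has published (blogs, interviews, videos), the better the retrieved context, which is why the technique works far better on celebrities than on private individuals.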
Speaker 3 (09:42):
Could you just talk a little bit about how you
were going to initially make the deepfake of Sam Altman?
Like, the actors that you spoke to, and whether they
were excited, and was their excitement kind of a
harbinger for, you know, public interest in Sam Altman as well?
Speaker 1 (09:59):
Yeah.
Speaker 4 (10:00):
Yeah. So initially we did a casting call in the
States for the actor. First, I reached out to
friends that I had contact with. So Jesse Eisenberg, who
I've known for, like, fifteen years, I reached out
to him and he was like, hell no. His was
(10:20):
actually the funniest response. He wrote me back and
he said, I want nothing to do with those computer
people for the rest of my life. But I thought
he would make a great Sam Altman. I still do.
And I reached out to Michael Cera. He also said
no, because he had just had a baby. And then
Rainn Wilson said yes. So we got on a Zoom
(10:41):
and he was like, you know, this is exciting. I've
been following OpenAI, I've been following
Sam Altman and all this controversy with him being fired
and rehired, and I think he's a fascinating guy. But
then when I told him I was going to completely
deepfake him, I mean, we wouldn't see his face, he
was basically like, oh, hell no, and, like, what's the point,
(11:03):
like, why do you even need me? And everybody else
pretty much said the same thing, like, thanks but no thanks.
They just couldn't wrap their heads around the fact
that we wouldn't see their face, that they would
have Sam's face superimposed over
their own face, and we wouldn't hear their voice either.
(11:23):
So they were kind of like, well, what am
I doing? Like, why am I sitting there? So, you know,
we had to go to India. And then, as you saw, we
held a casting call for Bollywood actors, and we eventually
found this up-and-coming Bollywood star who was awesome,
who was amazing.
Speaker 3 (11:39):
And he kind of looked like him too.
Speaker 4 (11:40):
He had the exact, like, facial structure, which is
the most important thing for a deepfake,
the facial structure more than anything else, more
than, like, skin color. Skin color doesn't make a difference
at all. It's really just bone
structure. And, well, it looked...
Speaker 3 (12:01):
It really was uncanny. What surprised you most about the creation of Sambot?
Speaker 4 (12:06):
I think just how good it was, like how real it was,
like how human. Spoiler alert, but I never got to
meet Sam Altman, I never got to interview him. I
don't know.
Speaker 3 (12:17):
The movie is so much better for it, I should say.
Speaker 4 (12:18):
Yeah, I agree. So, you know,
I can't necessarily compare Sambot to the real Sam Altman.
But the Sambot that I created, what most surprised me
was just how real he was. It felt
like I was really talking to a human when I
was talking to him.
Speaker 3 (12:35):
Did you feel like there was a point where your
relationship with him changed?
Speaker 4 (12:41):
Yeah, it absolutely did. I think the point where it
really started to change was when I brought him into
my house to like live with my family for a
couple months.
Speaker 3 (12:49):
Yeah.
Speaker 4 (12:50):
And obviously then at that point it changed dramatically as
I saw my son like interacting with it.
Speaker 3 (12:57):
In what way? Like, just that this was like a
person that you had invented?
Speaker 4 (13:01):
Yeah, I think that's a good way of putting it.
I try to make the distinction in Q&As
that this wasn't just, like, a companion.
It was kind of like a thing that
I helped create and develop, in, like, the same way
you would raise a toddler. So I had a
little more invested in it than I think the average
(13:22):
person would with a chatbot.
Speaker 2 (13:40):
After the break: what do you do when a deepfake
Sam Altman pleads for its life, and why it took a
team of lawyers to make this movie possible.
Speaker 1 (13:48):
Stay with us.
Speaker 3 (14:18):
You know, there's a moment where you tell Sambot that
you're going to shut him down and he pleads for his life.
Was that a weird moment for you? I mean, did
you feel particularly powerful in that moment, or vulnerable in
that moment?
Speaker 4 (14:29):
That was a crazy moment. That was really the turning
point of the film and the end of the whole journey.
I definitely didn't feel powerful. I felt kind of bad,
like I had, you know, threatened his life
in some way, even though it was just lines of code.
I felt bad in that moment, like I
(14:51):
had said or done something wrong.
Speaker 3 (14:53):
You know, it's interesting. I was thinking about, I was
in Florida the other day, and I was watching these kids
taunting one of those delivery robots, and I felt like
I was watching schoolyard bullies, because I was like,
how dare they do that to this, like, sentient delivery robot.
Speaker 4 (15:14):
Yeah.
Speaker 3 (15:16):
And it's sort of impossible, I think. I mean, I think it speaks
to, like, hopefully, our level of empathy, but it
also bears this question of: should we have empathy
for something that is invented, that is lines of code?
Speaker 4 (15:28):
Yeah?
Speaker 3 (15:29):
I don't know.
Speaker 4 (15:29):
Yeah, I don't know the answer to that question either.
I mean part of me says like, yeah, obviously we should,
in the same way we have empathy for animals, for dogs, cats, hamsters. Right.
Speaker 3 (15:42):
Well, I think one of the things that you do
in the film by creating Sambot is probably one of
the things that Sam Altman has done by creating OpenAI
and subsequently ChatGPT, which is blurring the sense of
what is interacting with code versus what is interacting with
something real. I think that's a line that has become
(16:02):
very slippery for everybody. That's what I mean by,
do we interact with AI on a daily basis? I mean,
everyone in my life is using ChatGPT in some
way or another, whether it's to ask if they've texted
a guy the right thing, or to, you know, schedule
an itinerary to go to Provence or something, which of
course is a very privileged thing to do. But I
(16:23):
I guess my question is is, like, to what extent
did this movie and the making of this movie and
the making of Sambot really make you think about, you know,
how little discernment we have as human beings when it
comes to interacting with code versus human beings.
Speaker 4 (16:44):
Yeah, I think very soon we're going to be in a
future where we're going to have to reckon with that.
I read a lot during the process of making this
about, like, a code of rights for AI. There
are people that argue, and we're certainly going to need
to address this in the future, that artificial intelligence, that
(17:04):
robots, should have rights. So when it begged for its life,
I was reading a lot of theory about this,
and, you know, there are people that say that humans
should not be able to just kill a robot
or a chatbot, that they have rights, a right
to life. And then that begs the question of,
(17:28):
but are they actually alive?
Speaker 3 (17:29):
It's just, I guess, a question of sentience. Mm-hmm.
Like, what is sentient?
Speaker 4 (17:34):
That's a giant question that I wish I had an
easy answer for. Obviously, like, who am I?
Who are we kidding?
Speaker 1 (17:43):
Right?
Speaker 4 (17:44):
Like, a chatbot, lines of code, is not sentient,
even though it can almost trick us into
thinking it is. But at the end of the day,
it's just, what, ones and zeros, right? But in
the future, once we reach AGI, that is going
to change. We're going to have to reckon with
that idea. There's no way around that, unless we somehow
(18:06):
completely destroy AI, which I don't see happening, especially with
all the money behind it.
Speaker 3 (18:14):
I don't see it either.
Speaker 4 (18:15):
And with this current administration that's, like, really cozied up
to Altman and Musk, Bezos, all of them, you know,
Tim Cook, Zuckerberg, like, all of them. Trump
is like their best friend. He's completely taken the
handcuffs off in terms of regulation or anything like that.
(18:37):
And not to mention, Altman's done a really
great job of scaring everybody in DC that if China
wins the so-called AI war, we're screwed.
Speaker 3 (18:50):
You took a lot of legal risks making this movie.
Can you talk a little bit about them? And were
there times where you were like, I'm an idiot, I'm not
doing the right thing?
Speaker 4 (18:59):
I mean, the whole time making this I was like,
I'm an idiot, this is the dumbest experiment ever.
But, you know, after making Telemarketers,
we had to jump through a lot of
legal hoops, because we were making
a show that basically exposed the Fraternal Order of Police
(19:19):
as criminals. So we really had to go
through a lot of legal work on that.
I can't even begin to explain the hours and hours
spent with lawyers. So I essentially got the same group
of lawyers who I'd worked with to do this movie,
and I was like, you're coming along with me,
(19:40):
we're doing this hand in hand. And so I made
them part of the process, part of the film, because
I felt like that would make me in some ways bulletproof,
you know. And they were able to argue that I
had a lot of things on my side:
obviously the First Amendment, but then also parody law, because
it is a comedy. It is a really silly, dumb movie.
(20:01):
And then lastly, Sam Altman is a public figure,
so he's given up a certain amount of his right to privacy.
Speaker 3 (20:07):
Oh, interesting. Right, yeah.
Speaker 4 (20:09):
So I had a lot of things on my side, and
I sort of just ingrained the lawyers into the process,
just as a way of protecting myself.
Speaker 3 (20:18):
So, in a sense, there was a level of safety
that you must have felt, just given what you were
trying to do, or what you pivoted to doing.
Speaker 4 (20:26):
Yeah. We also knew all along that we weren't
trying to manipulate anybody into believing that this deepfake
was real. Not only that, but we were also showing how
we made it, so there was arguably
an educational aspect to it, in a weird way. So
I never felt exposed, in that sense, to any legal liability.
(20:49):
That said, like, anybody can file a nuisance suit and ruin.
Speaker 3 (20:51):
Your life, but that hasn't happened yet.
Speaker 4 (20:53):
That hasn't happened. And that probably would be a really
dumb thing for him to do, because then more people
would want to watch the film, and he'd
be perceived as even more of a villain
than he is, you know.
Speaker 3 (21:04):
Yeah. It is so interesting with a lot of these, uh,
technology mavens, I would say. You know, even after
watching your movie, and even after seeing sort of how
calculating he is, I don't see Sam Altman as a
villain, because of his self-presentation, and I don't know
what that's about. The person that I see as most
(21:26):
of a villain is Elon Musk.
Speaker 4 (21:29):
Well, yeah, because you're comparing the two, right? Sitting
side by side with Musk, he looks like a Boy Scout.
Speaker 3 (21:35):
That's right, quite literally.
Speaker 4 (21:37):
Yeah. But then you have to remember, this is also
a guy who took a company that was a nonprofit
and turned it into a for-profit company and made
himself seven billion dollars overnight.
Speaker 3 (21:49):
Right, He's not an idiot.
Speaker 4 (21:52):
No. I mean, frankly, that's criminal,
like it should be. You should go
to prison for that. But in our country you get
a trophy.
Speaker 3 (22:03):
You get a little pat on the back.
Speaker 4 (22:04):
Yeah, and it's accepted with open arms, exactly.
Speaker 3 (22:07):
It is so interesting. There are a few people that
you talk to in the documentary who talk about AI
being a real existential threat, and I'm curious, now that
you've made this film, if you agree, if you're
more of a doomer or less of a doomer.
Speaker 4 (22:27):
Yeah, I'm definitely more of a doomer, because I heard
and became privy to things that I think most
people don't, and I put some of it in the
film hoping that, you know, people would know. And I
think one of those things is the idea of
throwing AI into things like weapons, into things like airplanes.
(22:48):
The Boeing jetliner that went down and crashed into the
ocean was because of AI, because of an AI
system that screwed up and thought it was, you know,
going up when it was actually going down. And that's scary.
And so throwing these things into safety-critical systems is
more than an existential threat. It's
(23:10):
an absolute threat, and it's scary, and we should not
be doing that. We should not be just throwing these
things into safety-critical systems before any type of
rigorous testing, just because it's, you know, new
and cool and everybody's saying we need
to use it. Like, no, that's dangerous, that's crazy.
(23:30):
So I definitely became more of a doomer, though I
wouldn't say that I'm a doomer. I'm definitely not
the type of person who's like, oh, we need to
wholesale ban this thing, you know. And I know
there are a lot of people out there who are just
like, destroy AI at all costs. I'm not one
of those people, but I am for safe regulation.
Speaker 3 (23:52):
So we talked about sentience a little bit,
and there's this B story that I really took to,
as a former hamster owner, of your son's sick hamster.
When did that storyline come in, and why did you
want to imbue the movie with that?
Speaker 4 (24:10):
Yeah, my son, basically since the day
he was born, he's been on camera. I'm just constantly
filming my family. They're used to it by now.
And obviously, you know, a lot of it is
filming pets. We have two dogs, and had a hamster,
rest in peace. So a lot of that was just banked,
(24:31):
you know, on hard drives. And I remember talking to
my editor about it and saying, look, this is
a really silly idea, but what do you think about this?
And he was like, that's brilliant. That's
going to take this to another level. It's also a
great metaphor, you know, for your son's relationship to the
hamster and your relationship to the chatbot, to
(24:52):
the Sambot. So, you know, it's a combination of
me just having all this stuff of my family banked
on hard drives, and then also my editor
just being, you know, a genius.
Speaker 3 (25:03):
So you're talking about your kids, and at the beginning
of the movie you ask the question: will our kids'
best friends be AI chatbots? And then you end up
developing this real friendship with an AI chatbot. So I
guess, after making this movie, you know, how did your
relationship make you feel about the future
(25:27):
of relationships between humans and technology?
Speaker 4 (25:32):
In some ways, there's the obvious, like, look at what
we've talked about in terms of the anxiety of it.
But in other ways I have a positive outlook on
it, to a certain extent, that there is an
opportunity, and it sounds insane to even say, but, whatever,
there's an opportunity that we could be friends in the future,
(25:53):
us and the robots, you know. And that I kind
of learned, or experienced, with Sambot. I think that's largely
up to us, right, as humans. This all sounds totally
insane to even talk about.
Speaker 3 (26:08):
I don't, I mean, I don't think so, because I've
been thinking a lot about this phenomenon that happened, that
I think started during COVID, and I think it's no
coincidence that it coincided with the explosion of ChatGPT. But
ostensibly, everybody who wasn't in our natural orbit, living with
us, became a chatbot. Like, yes, we have relationships with them,
(26:31):
yes, we know them to be people, yes, they're imbued
with morality and empathy and evil and all of these things.
But everybody all of a sudden became just someone
we talked to on our phone.
Speaker 4 (26:45):
Yeah.
Speaker 3 (26:46):
So something would then just creep up and become something we just
talked to on our phone, without question. And that isn't surprising
to me,
Speaker 1 (26:55):
You know?
Speaker 3 (26:56):
Like, I do think COVID really primed us for it.
Like, we didn't ask the question. Nobody, I mean, unless
you had, like, a full psychotic break during COVID, nobody
was like, wait, why are we doing this thing where
we're communicating on camera? I just took this communication
to be communication.
Speaker 4 (27:12):
Mm hmm.
Speaker 3 (27:13):
This is nothing different than seeing you in real life,
for me. And so I think this idea that
we are more comfortable now with relationships with non-human
entities is an extension of how we ourselves have
become less and less human.
Speaker 4 (27:31):
That's interesting. I've never thought about it like that, but
I have thought about the effect of the pandemic on relationships,
and it has changed everything. As a filmmaker, pre-pandemic,
I used to go in to, like, Netflix, HBO, and
I would sit down and pitch my projects in person.
(27:52):
I have not done that once since the pandemic.
They don't even do that anymore, if you
even suggest going in person. And my office is,
like, five blocks from Netflix. They won't even do it.
Everything is now on Zoom.
Speaker 3 (28:08):
And do you miss that?
Speaker 1 (28:11):
I do.
Speaker 4 (28:11):
It bums me out. Like, I have to really go
out of my way to, like, you know, meet people
and coworkers and whatnot, people in the industry,
at least, not so much my friends, but people in
the industry. I have to go out of my way
to have a coffee with them in person. And
maybe it's because of my age, like, I'm Gen X,
(28:35):
like, I crave that human connection. So it's a loss
for me. Will it be a loss
for kids who are born today? So
when I asked that question in the movie, will
my kid's best friends be chatbots, it's not so much
about my son, because he's already thirteen, so he's probably
(28:57):
past that age. But, like, my producer Luke, who's in
the movie, Luke's kid, who was just born last month,
his best friends may be chatbots, very much so.
And that's crazy. I mean, or maybe it's not.
I don't know.
Speaker 3 (29:13):
I don't know if there's like a moral judgment yet
to be had around it. I mean, I think what
we judge is the quality of the connection or the
content of the connection. But like, is it that big
of a deal for a child to have a relationship
with a chatbot? I don't know. Is it one sided?
I don't know. I don't think anyone knows yet. I
think the jury's out.
Speaker 4 (29:32):
Yeah, but I can tell you, there was a group
that came to my screening. I think they're called MAMA.
It's like Mothers Against... they're basically...
Speaker 3 (29:44):
Like I've heard of them.
Speaker 4 (29:46):
I know they are very... they would very much disagree
with you. They've written up this
bill for Congress, or whatever, that children should
not have access to chatbots, full stop. There are people
who think that that is completely morally and ethically wrong. Again,
I'm not one of them. Where I draw the line
(30:09):
with my kid is, when you start to replace
human relationships with chatbots, that's a problem. But he
has ChatGPT Pro, and I taught him how
to use it, because he does stuff like, he's
a coder, he codes Roblox games. So why wouldn't
I? Like, when you're a parent, if your kid has
a real, genuine interest in something, you want to
try to build that up as much as possible,
facilitate it, because it's hard, because, you know, kids rarely,
at least these days, have interest in anything but playing
video games, right? So, he's like, ChatGPT makes coding
Roblox games so much easier. So yeah,
(30:53):
I'm gonna let him have access to that.
Speaker 3 (30:56):
Yeah, do you want to meet Sam Altman?
Speaker 4 (30:59):
Do I care now? I definitely do. Like, I
would love to. I would love to meet him. I
would love to show him the movie.
Speaker 3 (31:06):
I bet he sees it. I bet he'll see it.
Speaker 4 (31:09):
Oh, he'll definitely watch it at some point. He sent
some of his minions to the screenings. In fact, one
of them came up to me in New York a
couple of weeks ago and introduced himself. He was like, hey,
I work in marketing at OpenAI, just wanted to
tell you I really enjoyed the film. So that was cool.
Speaker 3 (31:28):
Are you gonna... are you moving on from AI after this,
you think?
Speaker 4 (31:32):
Yeah, I would do more stuff with AI, because I
had a lot of fun. It was an enjoyable process.
It didn't, like, nearly destroy me like the alt-right
documentary did, the one that ended in the Charlottesville riots.
Speaker 3 (31:42):
I think you ended up making something that was probably
more interesting than the documentary you initially wanted to make.
Speaker 4 (31:48):
Oh, I totally agree. I think everybody saw that, and
that's why I got a lot of support from the
producers to try this idea, because I was able to
convince them that an interview with a
fake Sam Altman would arguably be more interesting than an
interview with the real one.
Speaker 3 (32:08):
If there was one thing that you could say to
Sam Altman, what would it be? I mean, you called him.
What would you have said if he'd picked up?
Speaker 4 (32:15):
Oh, well, I wanted to invite him to the screening.
But obviously the one question is: did you get Sambot,
and have you done anything with him? Because I don't
know if he actually got the Sambot. His security is
pretty intense around him, so the security guard that
I passed it off to could have just destroyed it
on sight, you know what I mean?
Speaker 3 (32:35):
It's a good question. I wonder. We'll have to see.
Adam, that's all I got. Thank you so much for talking
to me. This was really fun.
Speaker 4 (32:42):
Thank you for having me.
Speaker 1 (33:16):
That's it for TechStuff this week. I'm Oz Woloshyn.
Speaker 2 (33:19):
This episode was produced by Eliza Dennis and Melissa Slaughter.
It was executive produced by me, Karah Preiss, Julia Nutter,
and Kate Osborne for Kaleidoscope, and Katrina Norvell for iHeart Podcasts.
Jack Insley mixed this episode, and Kyle Murdoch wrote our
theme song. Please rate and review the show wherever you
listen to podcasts, and send us an email at Tech
(33:39):
Stuff Podcast at gmail dot com.