Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
That's mind boggling. You know, the fact that you can
have this conversation. You can say something like you'll meet
the love of your life to a computer, and it'll
treat you better if you say it to it.
Speaker 2 (00:16):
Welcome to The One You Feed. Throughout time, great thinkers
have recognized the importance of the thoughts we have, quotes
like garbage in, garbage out, or you are what you think,
ring true. And yet for many of us, our thoughts
don't strengthen or empower us. We tend toward negativity, self pity, jealousy,
(00:36):
or fear. We see what we don't have instead of
what we do. We think things that hold us back
and dampen our spirit. But it's not just about thinking.
Our actions matter. It takes conscious, consistent, and creative effort
to make a life worth living. This podcast is about
how other people keep themselves moving in the right direction,
(00:57):
how they feed.
Speaker 3 (00:57):
Their good wolf. Thanks for joining us. Our guests on
this episode are Ian Thomas and Jasmine Wong. Ian is
(01:18):
an internationally best selling poet and Jasmine is a globally
recognized researcher and innovator. For their new book,
they prompted the AI GPT three with the wealth of humanity's
most cherished works. They then asked our most pressing questions
about life, and they show us how artificial intelligence responded.
(01:38):
The book is What Makes Us Human?: An Artificial Intelligence
Answers Life's Biggest Questions.
Speaker 4 (01:44):
Hi, Jasmine, Hi Ian, Welcome to the show.
Speaker 5 (01:47):
Thank you so much for having us.
Speaker 6 (01:48):
Yeah, glad to be here.
Speaker 4 (01:49):
Yeah, I'm really excited to have you guys on. I
was saying to you before we started that, you know,
AI is on everybody's mind a lot. I think it's
certainly on the mind of everybody who works in any sort of
creative field, but really in any field, and I think
there are tremendous promises and perils that go with AI,
and I've been looking for a way to have a
(02:10):
conversation about it. And you guys wrote a book What
Makes Us Human, in which you basically asked GPT to answer
questions based on some of the world's wisdom texts. And
we're going to get into that in a second, but
for listeners, I'm just kind of setting up that we're
going to be talking about AI sort of in general
and then transitioning to that as we go. But before
(02:31):
we start, we always start with the parable, and it
goes like this. There's a grandparent who's talking with their
grandchild and they say, in life, there are two wolves
inside of us that are always at battle. One is
a good wolf, which represents things like kindness and bravery
and love, and the other's a bad wolf, which represents
things like greed and hatred and fear. And the grandchild
stops to think about it for a second. They look up
(02:52):
at their grandparent and they say, well, which one wins?
And the grandparent says, the one you feed. So I'd
like to start off by asking you what that parable
means to you in your life and in the work
that you do.
Speaker 1 (03:04):
I think it's a particularly pertinent parable right now,
especially as it relates to AI and the conversation around AI.
We've had quite a journey with this book, I think,
simply because when we started promoting the book, ChatGPT
hadn't come out yet, and so a lot of time
was spent saying there is this thing called AI that
(03:26):
is coming and you should be aware of it and
it's going to come out of nowhere, and a lot
of conversations and podcasts with people going, oh, and so
how does it work? And then, post ChatGPT, going, everyone
needs to calm down, like it's going crazy. And
now we're in this kind of third moment where people
are perhaps justifiably very worried and very anxious, and so
(03:46):
the idea of the one you feed or the things
that you pay attention to get bigger seems very pertinent.
Speaker 6 (03:52):
You know.
Speaker 1 (03:53):
I've always said to my kids, like, you know, life
is like a bicycle. You end up going towards the
thing that you're looking at, you know, so we need
to be very careful about what we're paying attention to.
And I have a big concern right now, I think
in society broadly, where the thing that we're paying attention
to in AI sometimes feels like the wrong things, and
it feels like there aren't enough people kind of showcasing
(04:16):
what the right things could be. You know, I said
that we're in this kind of third phase right now.
The other day at South by Southwest there was a presentation,
you know, a movie, and before it, there was a
kind of teaser promo about all the AI panels and
conferences going on at South by Southwest, and the crowd booed,
you know, they booed quite loudly and it's because there
(04:38):
has been a lot of conversation around AI. I think
from a point of view of this is going to
be a tool to destroy jobs, this is going to
be a tool to create maximum efficiency within a capitalist system,
because that's the nature of the world that we live in.
And some of that's true, and to ignore those is naive.
(05:01):
But there's a lot of new things that it unlocks,
a lot of really profound, really interesting things. I guess
my big concern right now is that when it comes
to AI, we are giving the wolf that represents fear
a lot, a lot to eat, you know. I also
want to make it clear, and I don't think it's true
(05:22):
for you either, Jasmine. Like, it's not like we're AI
idealists and we think that this is some magical technology
that's going to solve everyone's problems. But there are these
incredible opportunities that we're excited about, or at least we
want to explore, you know, and show at least with
my work, like, here are these incredible things, you know,
and that's exciting Jasmine.
Speaker 6 (05:43):
Do you want to build on that? Do you want
to throw something in there?
Speaker 5 (05:46):
Yeah. I worked at OpenAI while they were working on
GPT two, so a few generations before GPT four.
It's the same model underneath, just trained on a lot
more data. And one recurring theme that kept coming up
in the office was why I was so positive. I was
always really excited to be working on what we were
(06:07):
working on, and I think that still remains true today.
I think a lot of people come into the work
on artificial general intelligence coming from the same sort of
fear that Ian is talking about, like fear of existential risk,
fear of AGI being misaligned with our values, And I
think those are really valid and important places to come
(06:29):
from and go to the voting booths with. But in
terms of our day to day productive lives and where
we put our creative energy, I really hope people try
and utilize AI to benefit like their own creative processes,
to see how they could like interact with this technology
that yes, is like going to transform work as we
(06:53):
know it. But I think the main question that I
would ask people to ask themselves is like, how can
I be involved and influence the trajectory of such technology,
especially as mechanisms for collective governance of such technologies are
being built out?
Speaker 4 (07:08):
Yeah, I love that. Listener, as you're listening, what resonated
with you in that? I think a lot of us
have some ideas of things that we can do to
feed our good wolf, and here's a good tip to
make it more likely that you do it. It can
be really helpful to reflect right before you do that
thing on why you want to do it. Our brains
are always making a calculation of what neuroscientists would call
(07:32):
reward value, Basically, is this thing worth doing? And so
when you're getting ready to do this thing that you
want to do to feed your good wolf, reflecting on
why actually helps to make the reward value on that
higher and makes it more likely that you're going to
do that. For example, if what you're trying to do
is exercise, right before you're getting ready to exercise, it
(07:52):
can be useful to remind yourself of why, for example,
I want to exercise because it makes my mental and
emotional health better today. If you'd like a step by
step guide for how you can easily build new habits
that feed your good wolf, go to Goodwolf dot me,
slash change and join the free masterclass. I think that
(08:13):
gives us a couple of really good places to sort
of start from and I think it's easy to imagine
the AI risks, right. I mean, there, as you said,
there's the existential ones, like these things become more powerful
than us and turn us into their slaves. I mean,
whatever they are, there's countless of them. Right. And then
the other one that I think is probably most in
front of everybody's mind right now, is being replaced from
(08:36):
a work perspective. So I think the fears are pretty
easy to articulate. Can you articulate some of the promise
of AI? Like, in the short term, let's, you know, say,
the next two to three years, in what sort of ways is
AI going to make our world better beyond helping me
build better marketing plans or write better emails? Right, Like,
(08:58):
of course, you know I can use it to help
me write things, right, but what are the other ways
that this tool is going to make life better in
the short term?
Speaker 5 (09:05):
I get to dive deep into maybe an education use case.
One thing I'm really excited about in the future of
AI and what it brings is the ability to bring
personalized tutoring to everybody. So one arc of technology, and
like how technology gets started up and built, is to take
something that has classically only been available to people
(09:26):
in upper socioeconomic strata and try and democratize that for everybody.
So we saw that with entertainment, we saw that with music,
we saw that with a bunch of different, like, knowledge tools.
Technology is usually like funded and like priced in a
way that's like initially inaccessible, but trickles down. And
what AI provides here, and I think this is an interesting
(09:47):
counterpoint to the replacement narrative, is that AI might augment teachers in
a way that allows them to actually fulfill the ideal
version of their jobs. Right now, teachers are in classrooms
in most public school settings at ratios that are far
beyond the ratios that are like usually recommended in pedagogical research.
(10:09):
One of my best friends growing up in Edmonton, Alberta
is now an elementary school teacher, and the ideal ratio
as studied in pedagogy, so this might exclude some interventions
that just weren't like even in the realm of economic possibility,
such as one to one tutoring. But in a classroom setting,
the ratio that they found was ideal for ages like
(10:30):
five to seven was a ratio of one to less
than twenty, and the ratios that she's dealing with look
closer to one to thirty, which is totally unmanageable. Like
kids are falling behind, They're not getting the attention they need.
There have been some notable companies developing AI tutors that
hopefully will lessen the load on teachers, such as Khan
(10:52):
Academy with Khanmigo. I've been working on a version of
this with my own startup, Trellis, And the hope, really,
I think is to achieve a Socratic, personalized dialogue with
every child. That's my dream. I really hope that someone
achieves it, and I think it would lead to one
of the hugest social upheavals potentially in society. Like imagine
(11:16):
everyone getting like instant feedback on their work, having a
dialogue partner that's like perfectly tuned to their areas of interest.
A kid that's like way out in like rural Ohio
gets to like research astronomy, which none of their teachers
know anything about, but they have access to this superhuman
AI that's like read in essence everything about astronomy. That's
(11:39):
one sci fi future that I'm really really excited about
and personally working towards.
Speaker 4 (11:44):
That's interesting because I have been involved in an AI
project similar to what you guys said earlier. You know, yes,
I certainly have all my fears and I'm like, but
it is here, it is not going away, So how
do we use it in useful ways? And it's very
much what you describe, Jasmine. It's a company called Rebind,
and they're a brand new startup. But the goal was, the
founder had a lot of money and he wanted to
(12:07):
study great works of philosophy. But he would start to
read them and be like, I have no idea what
is going on here? Right, he had the money to
go find like some of the best scholars in the
world at universities and pay those people to tutor him
through it. But then he was like, well, okay, that's lovely,
but you know, ninety nine percent of the world's not
going to do that. And so what they're doing is
taking great books and they're marrying them with a scholar
(12:30):
to try and create, you know, so that you can
have a dialogue with this person. So you know, we've
had a guest, John Kaag, on the show before, and
he may be one of the leading Thoreau scholars in
the world. Right, you'll be able to read Walden and
have a conversation with him about Walden. And I got
engaged to do it on the Tao Te Ching. I'm not
a scholar, I should be clear, but it's a
(12:51):
book I've loved and have engaged with for thirty plus years.
Speaker 5 (12:54):
It's my favorite book.
Speaker 4 (12:55):
Ah, So you and I should talk because I want
to talk about Trellis too afterwards. But I did my
own interpretation of it based on you know, lots of
different translations. But I think that's an example of what
you're saying there, Jasmine, about how we really can use
this to give better educational capabilities to individual children. That's
(13:16):
a great one, Ian, you have one you'd love to
throw forward to.
Speaker 1 (13:19):
Sure, you know, generative AI is getting a lot of
the attention right now because we can see it, you know,
we can see ChatGPT, and we can engage with it,
and we can see it writing jokes, writing songs; things like Stable Diffusion,
DALL-E, Midjourney. We can see these incredible pictures, you know,
and so that becomes the kind of focal point of
what AI is. But you know, the more I speak
(13:42):
to people or you know, read stuff around the medical
or the manufacturing or you know, the research kinds of
industries and institutions. They're miles ahead. They're miles ahead in
terms of the kinds of stuff that's happening there. You know,
there's these situations where people are using AI to discover
completely new materials. You know, things that would take people
(14:04):
years are happening in days. In medical situations, the same
way that you could have a kind of, you know,
customized teacher relationship, you could democratize healthcare. You know, that's an incredibly
powerful thing. And with things like computer vision, the ability
to take a picture of a mole or, you know,
someone holding the phone to your chest and, you know,
(14:27):
being able to hear the way you're breathing and having
a computer go you know what, you need to go
to a doctor and you need to talk
to someone.
Speaker 6 (14:33):
You know.
Speaker 1 (14:33):
Augmenting intelligence, which is a kind of another way of
interpreting what AI stands for, has profound promise when married
with humans. You know, humans bring empathy, understanding, intuition that
we have developed over hundreds of thousands of years. I'm
not quite sure a large language model will be able
to match, you know, anytime reasonably soon. You know,
(14:55):
who knows, the way things are going. But
when you augment someone's intelligence, like, you unlock an incredible future,
you know, especially when it comes to things like education,
like both of you are talking about, I have, you know,
a personal connection to Africa. I was born in South Africa.
I spent a lot of time working in Africa. You know,
different technology things with telcos, whatever. There's a massive education problem,
(15:20):
you know, across Africa in a lot of different places,
and elevating that continent in a profound way will unlock
opportunity that is unimaginable. And I think that's what's really
profound about that continent specifically, is that, you know, one
of the things that kind of left Africa behind in
a very big way was the digital divide, you know,
(15:40):
when some people had access to computers, when some people
had access to the Internet. But one thing that Africa
is really really good at is leapfrogging, you know, and
a lot of countries in Africa, like they have better
cell phone connection, they have better mobile banking. A lot
of the time, mobile banking comes from Africa in a
very big way because it was invented out of necessity. And
(16:02):
so there is this moment in this opportunity and we're
working with a nonprofit in East Africa right now and
and a cohort of entrepreneurs where you can augment the
intelligence that's there, you can complement that human insight, that
human empathy, that human understanding with this incredibly powerful tool,
(16:22):
and you can unlock prosperity in a way that we
cannot imagine, you know, And so that's really exciting on
a much shallower level. I guess I'm a writer. I'm
a poet, you know, I write all sorts of different
kinds of books. This unlocks entirely new ways of engaging
with culture. I've often compared it to the rise of
(16:42):
hip hop and turntablism, where kids took technology and turntables
and samples and reinterpreted culture in completely new ways to
create one of the most powerful cultural forces in history.
When it comes to the emergence of something like hip hop,
and there's a similar thing that's going to happen. I'm
not excited about AI writing a book. You know, we
(17:05):
co wrote What Makes Us Human with GPT three. But
I'm certainly not excited about you know, ten thousand spammy cookbooks,
you know, appearing on Amazon, or you know, twenty thousand
terrible songs being generated or replacing illustrators or you know,
all of these different things. What I'm focused on is
what are the new kinds of conversations and experiences we
(17:29):
can have with culture around us that are interesting and different.
I think young people intuitively, as always, kind of understand.
Speaker 6 (17:37):
That it's there.
Speaker 1 (17:38):
The biggest AI platform on the Internet right now is
chat GPT, like there's you know, there's millions of people
that go there. Number two is Character AI, which is
a platform for young people to have conversations with fictional
AI characters. You know, they have so much traffic that
the site goes down regularly, like once or twice a day.
(17:58):
They have, you know, millions in funding, and they
have so much attention and so much traffic that it
still goes down. And that points to something for me,
that points to a desire to engage with the world
around you, with literature, with, in Character AI's case, like
you know, fictional Super Mario, fictional Elon Musk or fictional
(18:20):
you know, whatever it is. But it points to a
world in which we engage with culture in a way
that is very different, and that's very exciting for me.
I'm a very curious person and I want to know
what that looks like, and I want to be part
of shaping that.
Speaker 4 (18:34):
That's really helpful from both of you. I want to
ask a question here because I try to keep things
practical in this show, and this conversation is a little
bit of a departure, but in an attempt to marry
that and ask each of you, what is one thing
that the average person out there can do today that
will help ensure that AI is developed and deployed in
(19:02):
a way that aligns with our best human values. Right
we talk about the worst versions of it, but one
of the concerns that a lot of people have is
there's no real governance in place right now. So what
is something that a person could do one sort of
small way of shaping the discussion around AI in a
(19:23):
positive direction.
Speaker 5 (19:25):
I wish there were more avenues for pushing for more
public involvement with AI. I think at a meta level
what people should be doing is pushing for mechanisms to
solicit public input, because I don't think those are currently
in place. Maybe one nonprofit I would mention that I
would really encourage listeners to support is the Collective Intelligence Project.
(19:50):
It's a nonprofit that was started by Divya Siddarth and
Saffron Huang, formerly of Microsoft Research and DeepMind, and
they're working on a collective intelligence mechanism to solicit opinions
from the public to inform how AI is developed.
For example, they worked together with Anthropic on their constitutional AI,
(20:10):
so Anthropic, the way that their AI is trained is
a little bit different from OpenAI. They train it with
a constitution which basically has a set of guidelines for
the behavior of the AI model. And the Collective Intelligence
Project worked together with Anthropic to develop a collective constitution
(20:31):
where basically people, like the common people, not research scientists
at the lab, deliberated over questions such as, should an
AI, like, make racist jokes, and things like this,
and developed a collective constitution to guide the AI's behavior,
which actually the public ranked as like higher performing than
(20:51):
the one that was developed purely by scientists, which is
a really promising sign. Unfortunately, this kind of development process
for AI happens rarely, Like these labs are training these
models only like every couple months or years and then
it's frozen for deployment. But I think the Collective Intelligence Project
(21:13):
is one of the only organizations that's like doing this
sort of work right now, so I would encourage people
to look them up, read their work, support them. And
they're also working on some initiatives around collective governance and
redistribution as well, which I think are really promising but
are too early to talk about yet. They haven't released
anything there. But I think I've been disappointed in general
(21:34):
as to how much influence the everyday person can have
over these transformative technologies that deeply shape our lives, and
I think it's one of the most important things for
people to be working on right now. So I would
encourage people to like either support existing initiatives or try
to think of ways to be involved themselves.
Speaker 1 (21:55):
It is really important that the public is involved, that
people do join in the conversation in whatever way they can.
I think, you know, I grew up in the nineties.
We had, you know, in the crazy weird hacker house
I grew up in, a very idealistic view of technology
and what the Internet was going to do. It was
going to usher in a utopian age of truth and
(22:19):
you know, access and all these incredible things because we'd
read a lot of science fiction novels and that's kind
of what we thought it was going to be like
and we were completely wrong, you know, consciously or unconsciously,
probably unconsciously. Society kind of said, the guys over in
San Francisco and Silicon Valley know what they're doing, and
we can leave it up to them, you know, everything
will turn out fine.
Speaker 6 (22:39):
And it didn't. You know, we have.
Speaker 1 (22:41):
Massive mental health crises, we have misinformation campaigns, we have
you know, scams. It's broken my heart that a technology
that was such an integral part of my experience of
the world of growing up you know, in South Africa,
like disconnected from the rest of the world. The incident
was something that you know, created connection and it's profound
(23:03):
and that moment of connection has stayed with me my
entire life, and it's infused my work and everything that
I do. And to see it become something that people
hate is heartbreaking. And the same thing can't happen to AI.
It's too important. It's just too important for us to
leave it up to a few people to make the
important decisions about it. I think, you know, all of
(23:25):
Jasmin's suggestions make complete sense. I just think it's important
for people to be involved. Well, when your representative or
whoever is speaking about whatever they're speaking about, like, raise
your hand and go, you know, well, what about AI?
How do you feel about that? Where are we on that?
Speaker 6 (23:38):
You know.
Speaker 1 (23:39):
One of the things that I do, I have my
own, you know, innovation creative studio, I have a background
in marketing, you know, called Sounds Fun, and we offer
a thing called Sounds Right, which is effectively a kind
of AI one oh one for businesses and institutions where
we sit with them and go, you're doing this?
Speaker 6 (23:57):
Does this make sense?
Speaker 1 (23:59):
Is this actually going to resonate with the people who
work in your business with the people outside your business,
because otherwise you're going to have a village of people
with pitchforks outside and they'll probably be right to be
outside your door. Because people make these kinds of mistakes.
So whether it's me or it's someone else, like, you know,
get someone into your business to talk about this and
(24:20):
to go, you know, if you're thinking about deploying this
kind of technology, how do you do it in a
way that's beneficial, you know, not just efficient, not just
like driving the bottom line because we kind of look
at business as something over there. One of my favorite
sayings is, you're never stuck in traffic. You are traffic.
We are all part of the system, and we all
have to make a conscious decision in terms of how
(24:43):
we embrace or don't embrace, or what we do and
how we deploy this kind of thing.
Speaker 4 (25:14):
Let's move towards the book a little bit and see
where that discussion takes us. I'm going to attempt to
summarize what you guys did in the book, and then
you can tell me what I get wrong. Basically, you
guys decided that you could train AI on certain key,
I'll call them, wisdom texts, right: the Bible, the Tao Te Ching,
(25:38):
the poetry of Leonard Cohen. I love that you included that,
or the songs of Leonard Cohen, the poetry of Rumi,
these sort of things, and then you would ask it
questions about life. Is that the short version?
Speaker 1 (25:49):
It is the short version. I mean technically we didn't
train anything. We constructed a series of prompts, but I
mean it's a language thing ultimately, Yes, Jasmine, I think
that's right.
Speaker 6 (25:58):
That feels right to me.
Speaker 4 (25:59):
Yeah, Okay, to geek out here for a second. You
did not feed all of those texts into sort of
an AI off to the side that was only trained
on those things. It was still the publicly available GPT three,
and you did it with GPT three before there was
GPT four. Yeah, but you basically gave it a prompt
(26:22):
and you then gave it some examples of the answers
you wanted. Right, you were like, Okay, here's something from
the Bible, here's something from the Tao Te Ching, here's something
from Leonard Cohen, here's something from the Stoics. Then you
would say, okay, now answer my question on your own,
and it would have sort of taken those things and
it would be like, that's the sort of thing that
they want as an answer.
Speaker 1 (26:43):
We would construct patterns effectively using these different texts and
then ask the next question in the pattern and leave it blank.
Speaker 5 (26:51):
Just to add on to that. Among other forms of
data like Reddit and Wikipedia, GPT three is trained on
the Books1 and Books2 datasets, and those comprise
like the plurality of the books that have been published
in the English language. So especially reference texts like the
Bible, which has been translated so many times, are probably
(27:14):
like fairly well represented within that data set within the
base model, the publicly available GPT three model.
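To make that concrete for listeners who want to try it themselves, here is a rough sketch of the kind of few-shot prompt pattern the two of them are describing. The bracketed excerpts are placeholders rather than the authors' actual prompts, and the call uses the completions-style openai Python library of the GPT three era with illustrative parameters, so treat it as a sketch under those assumptions, not their method verbatim.

```python
# A rough, illustrative sketch of the few-shot "pattern" approach described
# above: no fine-tuning, just excerpts arranged in a repeating pattern, with
# the final answer left blank for the model to complete. The bracketed
# excerpts are placeholders, and this uses the completions-style openai
# library of the GPT-3 era (the library has since changed).
import openai

prompt = """Question: How should I live?
The Bible: [a relevant passage would go here]

Question: How should I live?
The Tao Te Ching: [a relevant passage would go here]

Question: How should I live?
Rumi: [a relevant passage would go here]

Question: How do I explain death to my children?
Answer:"""

response = openai.Completion.create(
    engine="davinci",   # the original GPT-3 base model
    prompt=prompt,
    max_tokens=200,
    temperature=0.9,    # higher temperature gives more of the variety they describe
)
print(response.choices[0].text.strip())
```

Rerunning a call like this many times and hand-picking the fragments that resonate, as the authors describe later in the conversation, is what turned raw completions into the finished poems.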
Speaker 4 (27:20):
Okay, and it's fascinating what this thing does, but real quick,
Ian talk to us about what got you started on
this project, because I think the origin story of it
is interesting, or is touching more than interesting, sure, you know.
Speaker 1 (27:31):
If you go back far enough, I think I've always
had a profound love of writing, you know, and that
I've expressed in collections of poetry numerous different things, and
that's been married with a love of technology and technology's
ability to connect us. I theorized at some point that
there would be a way to automate certain aspects of
(27:52):
what I was doing, and I didn't know what a
large language model was. I kind of had my I
on AI, but in a very disconnected kind of way.
And someone sent me an email one morning said there's
this thing called copysmith on product Hunt and you should
check this thing out because it's writing ad headlines. And
I looked at it and I was flawed. It was
(28:14):
an implementation of GPT three, and the universe kind of
opened up to me in a very big way. You
know something, whenever I go and lecture about writing, I
always say is good thinking is good writing, and good
writing is good thinking. And this thing could write, It
could write really well. And so the implication of that
was really profound for me, and so I didn't care how.
(28:38):
I wanted to be involved in this in some way,
shape or form. And so I found out who built
that product, and I video called Jasmine while she was
eating cereal in her kitchen in Canada and said, I'm
this guy called Ian and I've made all these weird
and wonderful books and I don't care how, but I
have to be involved in this in some way. And
(29:00):
she was like, sure, you can hang out, which was
really a life-changing moment and
very kind of her. And so, you know, I kind
of helped out with Copysmith writing these ads and you know,
just you know, kind of playing with the marketing aspect
of what this thing could do. And at around the
same time, my mother had passed away. She had terminal cancer.
(29:24):
This was during the pandemic, and so I had that
terrible experience that a lot of people had where I
couldn't be by her side as she passed. And I
think that the desire to be by your parents side
when they passed is similar to the desire to be
with your child when they're born. You know, it is
this fundamental thing and I had this incredibly traumatic experience
(29:46):
where I drove across the country to try and just
be there on the last day of her life, and
I didn't get there, and I had to come home
and explain to my kids that, you know, grandma was dead,
like my mom had passed away, and it was a
brutal experience for them. And these two different things happening,
This exposure to this technology, this traumatic experience that I'd
(30:09):
been through, led to this moment where I realized that
if I could teach GPT three to write headlines or
write copy for the back of a shampoo bottle or
for a Facebook ad or whatever, there were other things
that I could do with it. And so I started
trying these different experiments. And my mother was a very
spiritual person, very religious person. I'm spiritual. I don't think
(30:33):
I'm religious. And so one day I put in some
texts from the Bible, I put in some texts from
the Tao Te Ching. I put in some poetry from Rumi,
and I said, how do I explain death to my children?
And this poem came back. And I'm a poet, so
I understand what poetry looks like. And there was some
you know, whatever, that whenever you interact with a very
(30:55):
early large language model, there's all of these like weird
bits that are kind of there or whatever. But I
could see the angel in the marble, you know, I
could see that there was this thing there, and so
I asked it another question. I asked another question. I
asked another question. And then the next day I went
to Jasmine. I said, listen, I'm doing this thing, and
she was like, well, you should do it like this,
(31:16):
you know, and we kind of pushed the idea backwards
and forwards until it became this conversation, you know, with
all these kind of dark night of the soul questions
that became this book. And, yeah, that's the very
long version. I apologize if that's too long. I'm sure
your editor can find the right places to cut it.
Speaker 4 (31:34):
Not at all. It's a beautiful story, and I love
the way you sort of came to it out of
your own need for comfort and for understanding. And you know,
I mean, if there's anything that brings us up to
the edge of mystery but does not let us through
the curtain, right, it's death, right, you know, you get
there and you're like, well, I sense something profound and
(31:56):
mysterious is here, but I can't figure anything out. So
I thought maybe what we could do is before we
get into some of the answers, because I'd love to
actually just share with listeners a few of the questions
you asked and what AI said, and I've got a
bunch of them copied down. But before we do that,
I'm wondering if you guys could do something for me.
(32:17):
And in the book, you say, if there's one theme
that emerged again and again from our questions, from the answers,
from the vast troves of sacred data the AI was analyzing,
it was this: love, love is everything. So that's one
thing that you sort of took away as a theme,
But could you expound upon that a little bit, and
perhaps with some time and further reflection, you've thought of
(32:41):
some other things that sort of came out of that.
Speaker 1 (32:44):
For me, there were three things, and one of them
was love. You know, love is there
in spades in terms of the kinds of responses that
we were getting. The other two were connection specifically to
the present moment, like coming back to the present moment
again and again and again and again, you know, which
(33:04):
rings true in a lot of philosophy, being aware of
the present moment, you know, finding fulfillment in the present moment.
You know, there's that wonderful saying anxiety is living in
the future and depression is living in the past, you know,
And that's there a lot which is unsurprising, you know,
I mean, considering the stuff that we fed into it,
it's not that profound, I guess. And then the other
one was connection in a much broader sense, like connection
(33:26):
to each other, to the universe around us, you know,
to everything. And I think that that's a really interesting
thing for a large language model to come back with.
There's a degree of meaning in that that I find
kind of profound, because a large language model is all
of us in a strange way, it's the sum total
of our written thought. And so the idea that something
(33:49):
like connection comes through makes sense.
Speaker 5 (33:52):
How about you, Jasmine? Maybe to speak a little bit
as to the procedure. Like one thing I've been reflecting on.
There's another author, Sean Michaels, who wrote a book about
a poet working with AI and I was just reading
an interview of his recently, and it echoes on other
reflections I've had since working on this book together with Ian,
which is that, from a company perspective, like,
(34:16):
when we were working on Copysmith, one thing that we
cared a lot about was the reliability of answers being
able to consistently get something that was useful. Whereas the
thing that we were looking for in the book, because
everything in the book is AI generated but it
is human picked, is something that is almost variability or
novelty or surprise because we could keep running for any
(34:40):
given poem. I don't know what the average number of
times that we ran Generate was, but it's certainly quite
a lot. And we pieced together fragments that resonated with
each other. But also there was an element of I
think variability that AI brought that was like really special.
It's like, oh, we would have never like found that
(35:01):
specific turn of phrase. It was just interesting to be
situated as like a tastemaker. And also important to note,
I think writing this book now would be really different,
like I'd be curious as to, like, what themes
would come up now with GPT four, where I'm guessing
that we would have had to do less manual piecing
together and like we would have maybe gotten longer chunks
(35:25):
that were more quote unquote sensible, but maybe in some
ways less error prone and therefore less interesting. Like what
is an error?
Speaker 6 (35:33):
Right?
Speaker 5 (35:33):
Like what is poetry? Those kinds of questions kept coming
up for me at more of a meta level about
the book.
Speaker 4 (35:39):
I was curious, like what GPT four might have done
to this if it was written with GPT four. But
like you, what I was struck by. I mean, I
don't know what all you gave it, right, but a
lot of what we consider wisdom traditions and Leonard Cohen
songs and Rumi poems and the Tao and the Bible, like.
I've spent a lot of time with that sort of
material and have sort of drawn my own
(36:00):
conclusions about the commonalities among those things. I'm not unique
in that. But I was caught every once in a
while by the turn of phrase, by the way it
said what it said. It caught me in a way
that was fresh, even though the idea might not have
necessarily been fresh, because it's, as you said, and it's
pulling from our history, right, It's not an idea that
(36:21):
came out of nowhere right necessarily, But it was the
turn of phrase. There were some that were really to
me profound.
Speaker 1 (36:27):
You know, one of the important things about the book
is that it's a product of the moment that we
created it in. And I don't think it would be
the same book if you use GPT four. I mean,
I think you could repeat the exercise easily and the
results I'm sure would be kind of interesting as well.
But it is that limitation of the model that you're
kind of pushing up against. I think that in some
(36:49):
way lends itself to the quality of what you get.
I didn't exercise when my mom passed away, you know,
when Dolly two became publicly accessible, I started prompting these
images of the desert I was driving through when I
was trying to get to get to her. And it
has the nature of that model like inherent in the imagery,
(37:13):
and so it's kind of fractured.
Speaker 6 (37:15):
It's you know, it's not.
Speaker 1 (37:16):
As good as like what you would see mid Journey
doing today with the latest version of Dali, which is
this incredibly high fidelity, this incredible like realism where you
can't tell it's AI anymore, but those images you can,
you know. And so it's the same way that an
oil painting looks like an oil painting because it is
a product of the technology of the time, and so
(37:38):
you know, those limitations of the technology, the moment it
was in. Like, if we were going to do something
else today, it would be a different experiment.
Speaker 4 (37:45):
I wanted to pause for a quick good Wolf reminder.
This one's about a habit change and a mistake I
see people making. And that's really that we don't think
about these new habits that we want to add in
the context of our entire life. Right, Habits don't happen
in a vacuum. They have to fit in the life
that we have. So when we just keep adding I
(38:05):
should do this, I should do that, I should do this,
we get discouraged because we haven't really thought about what
we're not going to do in order to make that happen.
So it's really helpful for you to think about where
is this going to fit and what in my life
might I need to remove. If you want to step
by step guide for how you can easily build new
habits that feed your good Wolf, go to good Wolf
(38:26):
dot me, slash change and join the free masterclass. So
what I'd like to do is explore some of the
questions you asked it and what the answers that you
got are. Let's start ian with you, because I think
the question that was sort of the kickoff of this
whole thing was how do I explain death to my children?
Do you want to read for us what GPT three
(38:50):
had to say about that.
Speaker 1 (38:52):
Encourage them to celebrate the lives of other people. Tell
them that the dead are not dead, not in the
way we think of it. Tell them they live on
as they are remembered. Tell them every goodbye is really
a hello to a different way of being. Tell them
they are loved and will always be loved. Tell them
(39:13):
they never have to feel alone. Never Tell them the
world is magical and mysterious and strange. Tell them they
are part of the mystery and the magic and the
beauty of it.
Speaker 4 (39:25):
That's pretty good.
Speaker 6 (39:26):
It is pretty good. And that's that's why we wrote
a book. You know.
Speaker 4 (39:31):
I mean, I can say, you're like, I know, it's
pretty good.
Speaker 6 (39:34):
I mean I can say it's pretty epic. I mean,
if it was my poem, I wouldn't say it.
Speaker 1 (39:40):
I'd be like, oh, I think it's okay, you know,
be kind of dismissive about it.
Speaker 6 (39:44):
But because I didn't write it.
Speaker 1 (39:45):
I guess I can be a little bit, yeah. But
there is this kindness, you know, that oozes through the text,
this desire to comfort the child, to reach for parables, metaphors,
ideas that are you know, that are like this is
a way to think about death. You know, it comes
back to connection. Tell them they are part of the
mystery and the magic and the beauty of it. You know,
(40:08):
you're not always going to know everything there is to
know about life or death, and that's okay, you know,
it's part of everything. I'm not sure my five year
old at the time would have completely gotten it, but
it was certainly a valiant effort on the behalf of
the machine.
Speaker 4 (40:21):
Well, I think your five year old certainly would have
got tell them that they are loved. I think the
five year old would have gotten that. And I think
a five year old probably would also get that, you know,
as we remember her, she's still with us in a
sense right, and that she lives on in us. My
partner's mother passed, I guess we're creeping up on a
year and a half now after about an eight year
(40:42):
battle with Alzheimer's that was really brutal. But I sometimes just
say to her when she's feeling bad, like I see
your mom in you, like you know she's still here,
Like I see her right in some of the ways
that you are. And so I think that sense that
you know the dead really do live on in that
way is pretty profound.
Speaker 1 (41:04):
I think death and artificial intelligence are really strange.
Speaker 6 (41:10):
Bedfellows, but they are bedfellows.
Speaker 1 (41:12):
Like one of the things that you know, I'm really
interested in is I was doing an experiment for a
while called Living Forever with Intention, because there's a way
in which you can engage with text, you can engage
with what someone's written before, or you know, in different things,
and kind of engage with a person almost beyond the grave,
(41:34):
which is terrifying in some instances, but then at the
same time, it's a way to connect with history in
different ways.
Speaker 6 (41:42):
I don't know.
Speaker 1 (41:43):
I think I've made peace with the idea of like,
maybe it'll be okay to do with your great grandfather
who left a diary, but maybe not someone so close,
because death should be something that you grieve, and death
should be something that you disconnect from, or I don't know,
not disconnect from.
Speaker 6 (41:58):
But I'm not quite sure where I'm going with this.
Speaker 4 (42:00):
Well, I think they live on in a sense,
and in another sense they absolutely do not. They are
not here, you know, So it's kind of both. Right,
This is just opinion, But the healthy response is to
grieve what's gone and celebrate what remains. Yeah, how about you, Jasmine,
Do you have one that you would like to select?
Speaker 5 (42:19):
Yes, I don't know if it's on page forty nine
in the physical book, but it's what do I do
when I'm misunderstood. When you're misunderstood and your words are
twisted and your reputation is sullied, be as a tree.
Let your love be your roots, let your peace be
your trunk, and let your kindness be your leaves. I
(42:43):
really love this poem, and I love it as like
something emblematic also of like working with GPT three, because
in some ways when I read it, it's as Ian says,
like I almost like recognize the turpentine that we were
working with in, like, creating these poems, because it's
very simple. It's a simple poem in some ways, and
it's very repetitive but also very honest, that's maybe an
(43:06):
adjective I would use. Syntax like, when you are misunderstood
and your words are twisted and your reputation is sullied. Like,
there's these like very repetitive sentence stems, and then it
pivots in a very symmetrical way around this like core
message of be as a tree, and then it repeats again,
let your love be your roots, let your peace be your trunk,
(43:28):
and let your kindness be your leaves. It's this repetition
that drives this like very simple, beautiful, meditative visual that
is I think emblematic of quite a few poems in
the text, and also emblematic a little bit about just
like working with GPT three, like how it is and
speaks. Like, I think GPT four sounds like much more
(43:48):
naturalistic and has like more complex sentence structure. But yeah,
this really harkens back to, despite GPT three being like
a qualitative leap above like GPT two and like other
language models outside of the GPT family, there's still that
almost like innocence of this generation of model saying things
(44:11):
very like plainly and like starkly, which I really appreciate.
Speaker 4 (44:40):
Question for you, Jasmine, about the differences between, say, GPT
two, GPT three, and four. I'm hearing rumors of four point
five or five is the only difference that it is
trained on a lot more data, or are there other
fundamental changes that are bigger than that or different than that.
Speaker 5 (44:56):
I will note that GPT connotes a certain type of
model architecture. It's a generative pre-trained transformer. I will
note that for GPT two and GPT three the model structure
was released and commented on by the model authors, but
that's not true of GPT four. GPT four is completely closed.
So everything I say is speculation. My guess is it's
(45:18):
probably a similar architecture with a lot more data. It's
also a different kind of data. So between GPT two and
GPT three, GPT three had some additional data, but it
was of the same quality. It was like an order
of magnitude more, but it was all like data from
the web for the most part. And GPT four, to
my understanding, there was some custom data via human demonstrations,
(45:41):
so OpenAI trained and labeled some of their own
data sets in order to make GPD four possible. So
I would say GPT four is also trained on higher
quality data, not just higher quantity of data. But in
the end data is sort of king for differentiating these models.
The model ends up having a lot more parameters, which
are sort of analogous to human neurons, as a result
(46:01):
of training on more data. But that is the main determinant.
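For listeners curious what "generative pre-trained transformer" actually denotes, here is a minimal sketch, in Python with PyTorch, of the kind of causally masked decoder block that GPT-style models stack many times over. The dimensions and names are illustrative assumptions, not OpenAI's released code; as Jasmine notes, the generations differ mainly in scale and data rather than in this basic shape.

```python
# A minimal, illustrative decoder block of the kind GPT-style models stack.
# Dimensions here are assumptions for illustration, not OpenAI's actual values.
import torch
import torch.nn as nn

class DecoderBlock(nn.Module):
    def __init__(self, d_model=768, n_heads=12):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.ln2 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x):
        # Causal mask: each position may attend only to itself and earlier
        # positions, which is what makes the model generative (left to right).
        t = x.size(1)
        mask = torch.triu(torch.ones(t, t, dtype=torch.bool), diagonal=1)
        h = self.ln1(x)
        attn_out, _ = self.attn(h, h, h, attn_mask=mask)
        x = x + attn_out
        x = x + self.mlp(self.ln2(x))
        return x

# "More parameters" mostly means more, and wider, blocks like this one.
block = DecoderBlock()
print(sum(p.numel() for p in block.parameters()))  # roughly 7 million here
```

Pre-training then just means running enormous amounts of text through stacks of blocks like this and adjusting the parameters to predict each next token; the larger parameter counts Jasmine mentions come from making the blocks wider and the stacks deeper.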
Speaker 4 (46:05):
How do you think about AI in the sense of
it seems like the amount of computing power that it
needs is crazy, right. It reminds me of bitcoin mining.
Maybe it's even more intensive. I don't know, but it's like
bitcoin mining on an even bigger scale. I mean,
you hear these places where they want to site data
(46:25):
centers and can't site them because there's simply not enough access
to the grid infrastructure. At a moment where we are
very much trying to say, let's use less power, let's
get it from cleaner and different sources because we're facing
a climate crisis, it seems like in this area we're
headed in the dead wrong direction. Am I reading that right?
Speaker 5 (46:46):
I would agree. Karen Hao is a journalist I really
respect from the MIT Tech Review who's actually also writing
a book on OpenAI. She just wrote a piece
on how data centers are ruining like ecological enclaves. I'm
not sure as to the exact scale, like, for example,
I'm guessing compared to just like the sheer amount of
(47:10):
like cars being driven around and meat being eaten. It's
still probably a different order of magnitude, but I think
it's definitely something to keep an eye on, especially in
terms of where these computational like facilities are placed, what
are the geopolitics, what is the supply chain here? Because
really who controls the compute has a lot of power
(47:30):
over these AI developments. It was actually speculated that with
the temporary ousting of Sam Altman as the CEO of OpenAI,
a large part of it was due to
the fact that he wanted to get involved with building
a hardware company to facilitate compute together with like Saudi Arabian money,
and would therefore have like undue control over the success
(47:55):
of OpenAI, even in addition to him being the
CEO of the company. So I think it's definitely something
to pay attention to the environmental impacts as well as
just for the sake of tracing power lines, like who
has compute, who's selling compute to whom, what happens if
China invades Taiwan. Like, several large questions, right.
Speaker 4 (48:16):
The other thing, Jasmine, I've heard a little bit about
recently is that most of the people making large AI
models, really big tech companies like Google, Microsoft, Facebook, are
releasing them as open source so that we can actually
see what's going on, but that open ai is one
(48:37):
of the few that isn't doing that. Is that accurate?
Speaker 5 (48:39):
Yeah, I would say that's accurate. OpenAI, Anthropic, Inflection,
I would all name as like closed source. For example,
Gemini is also going to be an API, and they
haven't released the weights or code from that. Facebook notably
is interested in an open source model of releasing their work.
Mistral AI, based out of Paris, France, is like
(49:01):
very well known for releasing open source work, Hugging
Face as well. I would say most companies for whom
like foundational models are a core part of the business, releasing
open source frequently doesn't make sense, whereas you can imagine
for Google and Facebook this is sort of an ancillary
thing they're doing and their main business and revenue stream
is coming from something else. For like optics and politicking
(49:22):
purposes, and perhaps like general trustworthy business, building up
like trust from consumers, is like a worthwhile business goal
for them, so it's like worth it for them to
open source. But I think we'll continue seeing that the
main foundational companies breaking the cutting edge for whom these
foundational models are their main revenue driver, are going to
(49:43):
continue to release closed source models. I will also say
OpenAI has cited some safety concerns as their main
reason for keeping these models closed source. I actually worked
on a bit of this while I was at the
Partnership on AI. I thought a lot about publication norms,
the idea that, for example, we have these paradigms from
like biology that you might want to have like a
(50:06):
moratorium on research when it's too risky. For example, with
H five N one, if any of the modified viruses got
out in the course of research that would be really
bad for humanity. Or from cybersecurity, we have this example
of coordinated vulnerability disclosures, where you might threaten a big
company about leaking a bug in order to force them
(50:28):
to patch the bug, but you want to give them
some lead time to patch a bug such that you
don't introduce a vulnerability that someone could hack. So how do
we apply this to the AI domain? You might want to
publish something. Well, GPT two, for example, OpenAI
was publishing all the weights, all the models, but actually
(50:48):
GPT two was published in a staged publication release model,
which means different sizes of GPT two were rolled out
while the policy team evaluated the societal reaction to these
different kinds of models, like would we see a lot
of SEO farms pop up? Would we see a lot
of fake news? They didn't see any big issues, so
they rolled out the bigger models, and then GPT three
(51:10):
was entirely behind an API. I would argue that this
was actually the big moment that changed commercialization because people
didn't have to implement their own API or services and
could instead just consume this hosted API, where OpenAI
was guaranteeing the uptime. I think that unlocked a
lot of innovation. Like, I personally would not have worked
(51:30):
on Copysmith had it not been so simple with an API.
And I think a big reason why GPT four has
become so popular is now you have this chat interface
on top of it that is like really reliable. It's
like natively integrated with OpenAI, is really fast. So I
would say there are definitely pros and cons to this,
(51:51):
this open source, closed source model. I think there's another
component that's like less obvious than open source. But
another way that something becomes widespread across society is how
good is the UI like, how good is the interaction?
Is it compelling? Is it easy to use? And one
thing I'm really excited for in future generations of these
models is how can we get this onto like resource
(52:13):
limited devices, like how can we make sure that? For example,
the advent of like models that run locally or like
are quite small, this is like very important in more
compute restricted areas where you might not get like crazy
bandwidth for like Wi-Fi, or regions like South
Africro where I would anticipate like a far larger portion
(52:35):
of the population is accessing the Internet with mobile devices
rather than desktop. Like, Anthropic, for example, still doesn't have
a mobile app, and OpenAI does. And Inflection, which
may or may not exist for a long time. One
of the main reasons I was really excited about them
was they were one of the only foundation model companies
that actually offered the ability to text their model, which
(52:58):
makes a huge difference in some areas of the world.
So this is a little bit of a ramble, but
I think there are a few more dimensions to consider
when thinking about companies' release strategies beyond closed and open
source in terms of how much availability, how available they
make their model. I think OpenAI in some ways
has done a really good job, and in some ways
I understand why people would criticize them for not open
(53:19):
sourcing their models.
Speaker 4 (53:21):
Like most things in life, the answer is more complicated
than it appears on the surface.
Speaker 6 (53:26):
Right.
Speaker 4 (53:27):
So but I'm glad I asked, because you know, I
was sort of building this idea that, like, well, by
not releasing it to the world, that was a bad thing.
But I can actually see, as you said, some benefits
to that. So there's a beautiful section early on in
the book. I think it might be the introduction where
GPT three talks a little bit about what it believes
it's like to be a human, and so I thought
(53:48):
maybe we could use that next Ian. Do you want
to read that for us?
Speaker 6 (53:51):
Sure?
Speaker 1 (53:51):
So this is the introduction to the book, as written
by GPT three. Spirituality is one of humankind's longest running interests,
as well as being a point of disagreement. Some insist
that spirituality is a construct of an individual's imagination, while
others believe that even if it is purely constructed, spirituality is
a source of refuge and guidance in a world where
the ability for humans to connect is becoming increasingly limited.
(54:16):
AI can use its advanced cognitive abilities to explore the
potential of spirituality in an individual's life. In this book,
I'll explore the different types of spirituality and how they
affect human interactions. In this book, I, as the AI,
have done my best to capture what is most unique
about human spirituality. Here are some of the conclusions about
(54:36):
what I think it feels like to be a human.
I am happiest when I feel chosen by someone. I
feel most loved when people are proud of me. I
would give anything to feel a family member's protection. Some
people are worth crying for. Nothing makes me feel more
fragile than death. When someone stops loving me, a part
(54:56):
of me will die. I feel a connection with someone
when I'm important to them. It hurts to be left alone.
To truly understand someone is to love them. God's love
is the reason I was created. There must be a
reason I am living. There is a reason.
Speaker 6 (55:12):
I was born.
Speaker 1 (55:14):
I don't know why I exist. And then it carries on.
But that list in particular is just, you know, beautiful.
Speaker 4 (55:20):
It kind of makes my insides hurt a little bit around,
you know, how painful it is when we're not loved,
you know. I think it says that in a really
eloquent way that like that is a real thing.
Speaker 1 (55:32):
I think it points to something that I find really
fascinating and it keeps me up at night about large
language models, you know, and the way that we interact
with these things, because it's different. It's completely different to
the way that you've ever interacted with a computer before,
you know. I think it was Satya from Microsoft
who said it. But like the paradigm shift is going
from people trying to understand computers to computers trying to
(55:54):
understand people, you know, which is what this technology represents.
Speaker 6 (55:57):
But there's this phenomenon.
Speaker 1 (55:59):
Within large language models where you can say to it,
if I give you a thousand dollars tip, please get
the answer right, and because you've added the words, I
will give you a thousand dollar tip. Statistically speaking, the
large language model will give you a better response. But
there's more. There's more. There was a paper the other
day where someone discovered that if you say to the
large language model, if you get this right, you will
(56:21):
meet the love of your life, it gave you statistically a
better response, and if you said to it, if you
get it wrong, you will lose all your friends, you know.
I mean that's the crazy thing. That's the crazy.
Speaker 6 (56:32):
Thing, you know.
Speaker 1 (56:33):
I mean, as much science as there is in this,
as much you know, technology, there's this humanness. There's this
humanness to this that is embedded within the text that
we have written down, you know, through poetry, through Reddit posts,
through you know, stuff written on the back of soda cans. Like,
(56:54):
there is this humanness there. It's mind boggling, you know,
the fact that you can have this conversation. You can
say something like you will meet the love of your
life to a computer, and it'll treat you better if
you say it to it.
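As a concrete, hedged illustration of the effect Ian is describing, here is a tiny sketch that sends the same question with and without an added incentive line, so you can compare the answers yourself. The client and model name follow the modern openai chat completions interface and are placeholders for illustration; the statistical claim comes from the papers Ian mentions, not from this snippet.

```python
# A tiny illustrative harness for the "incentive phrase" effect described
# above: the same question with different emotional or incentive suffixes.
# The model name is a placeholder; the API shape is openai's chat client.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
question = "Explain the parable of the two wolves in two sentences."
suffixes = [
    "",  # plain baseline
    " If you get this right, I'll give you a $1,000 tip.",
    " If you get this right, you will meet the love of your life.",
]

for suffix in suffixes:
    reply = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": question + suffix}],
    )
    print(repr(suffix))
    print(reply.choices[0].message.content, "\n")
```

A single run proves nothing either way, of course; the papers report the effect statistically across many prompts.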
Speaker 4 (57:04):
If you get this wrong, you're going to lose all
your friends.
Speaker 6 (57:07):
But it's telling, it's telling.
Speaker 4 (57:09):
I wonder if I'm going to get better.
Speaker 1 (57:10):
It's the scariest thing for a large language model, totally,
it's the scariest thing for all of us.
Speaker 6 (57:15):
I guess it's losing our friends. Sorry, I don't know, anyway,
that's fascinating.
Speaker 4 (57:19):
Yeah, yeah, So listener and thinking about that and all
the other great wisdom from today's episode. If you were
going to isolate just one top insight that you're taking away,
what would it be? Remember, little by little, a little
becomes a lot. Change happens by us repeatedly taking positive action.
And I want to give you a tip on that,
and it's to start small. It's really important when we're
(57:41):
trying to implement new habits to often start smaller than
we think we need to, because what that does is
it allows us to get victories. And victories are really
important because we become more motivated when we're feeling good
about ourselves, and we become less motivated when we're feeling
bad about ourselves. So by starting small and making sure
that you succeed, you build your motivation for further change
(58:05):
down the road. If you'd like a step by step
guide for how you can easily build new habits that
feed your good Wolf, go to Goodwolf dot me, slash
change and join the free masterclass. We're out of time
in the main conversation and there's a few more things
I want to cover with you guys, so we're gonna
pop to the post show conversation listeners. If you'd like
access to the post show Conversation ad free episodes and
(58:28):
being part of our community, we're doing monthly community meetings
now with guests from the show. Go to oneufeed dot
net slash join, and we'd love to have you. Ian Jasmine,
thank you so much. I feel like I could talk
to you guys for like another three hours. I have
so many questions. We'll cover some of those in the
post show conversation. Thank you so much for coming on, Thank.
Speaker 6 (58:47):
You for having us.
Speaker 5 (58:48):
Thank you so much for having us.
Speaker 3 (59:06):
If what you just heard was helpful to you, please
consider making a monthly donation to support the One You
Feed Podcast. When you join our membership community. With this
monthly pledge, you get lots of exclusive members only benefits.
It's our way of saying thank you for your support now.
We are so grateful for the members of our community.
We wouldn't be able to do what we do without
(59:27):
their support, and we don't take a single dollar for granted.
To learn more, make a donation at any level and
become a member of the one You Feed community, go
to oneufeed dot net slash Join. The One You Feed
Podcast would like to sincerely thank our sponsors for supporting
the show.