Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Chad Woodford (00:00):
This week, I'm
sharing an episode of a podcast
(00:02):
called Hit of Happiness, where I spoke to Brian about artificial intelligence and consciousness, among many other things. This may have been my favorite interview so far, just because I love talking to Brian, but also because we explored so many topics in such a coherent fashion. And by the way, I'm recording this right now in the middle of the hurricane in Los
(00:22):
Angeles. So you might hear a little bit of rain sound in the background, but yeah, a little ambiance. Anyways, this interview is also a great setup for this three-part series that I'm working on, about the myth of artificial intelligence, the nature of consciousness, transhumanism, spiritual machines, and metaphor. As my longtime listeners know, I like
(00:45):
to do these deep dive episodes once in a while. So it's taking a while to do all the research and writing so I can finally record it, but I think it will be worth the wait. A few highlights from my conversation with Brian today include the fact that AI has pros and cons. It can automate monotonous work and allow more creativity, but it also poses risks like job displacement that require
(01:06):
broader societal solutions. Also, views on whether AI can achieve phenomenological consciousness vary, and in the interview with Brian I offer several reasons why this might be an intractable challenge. And as I will be talking about in my upcoming three-part series on the podcast, fears do exist about an uncontrolled kind of run
(01:27):
rampant, super intelligent AI. But I think these capabilities are often overstated. AI still relies on training data and does lack common sense, and there are so many other reasons that we probably won't get there anytime soon. And finally, the future remains uncertain around AI, but there are reasons, I think, for optimism. If we can mindfully
(01:49):
guide the progress of these systems and support them complementing our human strengths rather than replacing humans altogether, I think the key is maintaining humanity's agency and emphasizing humanity's potential alongside increasingly capable AI systems. So that's just a little bit of a preview of the conversation I had with Brian. So here we go.
(02:11):
This is my interview with Brian. I think you're gonna really enjoy it.
Brian Dubow (02:24):
Hello, fellow
happiness seekers. Welcome back
to the Hit of Happiness podcast, all about helping you reframe your reality, spread positivity and transcend your perceived limits. I met today's guest at a dinner party in LA a few weeks ago, and I felt like I could have spent days picking his brain. He has over 25 years of experience in software
(02:44):
engineering, law, product management, AI research, and yoga and meditation training, and is now dubbed an artificial intelligence philosopher. At a time where everyday humans are finding ways to leverage ChatGPT and other AI products on a daily basis, today's guest is here to help us understand some of the ethical and philosophical implications of artificial
(03:07):
intelligence and machine learning in our lives. While I'm personally still a bit hesitant to use artificial intelligence to do my everyday tasks for Hit of Happiness, I'm hoping this conversation with Chad will convince me otherwise. So with that, Chad Woodford, welcome to the Hit of Happiness podcast.
Chad Woodford (03:24):
Thank you, Brian.
It's so good to be here.
Unknown (03:27):
It's awesome to have you, Chad. And I'm very excited for this conversation. Before we dive in, can you just give our audience a bit of a: Who are you? Where are you from? What do you do?
Chad Woodford (03:37):
Yeah, sure. So
it's kind of a long answer, but
I'll try to keep it short.
Unknown (03:41):
Give us the long one over here. Sure. Wherever, that's right.
So I'm originally from upstate New York. And I've lived all over: I've lived in Vermont, and Atlanta, Boulder, San Francisco, India. So now I'm here in LA. And yeah, I like to move around, I guess. I'm currently in grad school for philosophy, cosmology and consciousness at the California Institute of Integral Studies in San
(04:05):
Francisco. But I've also been, like you said, an AI engineer, technology lawyer, product manager. I was actually a filmmaker at one point; I went to film school. So I just, I like school. I like learning things. And yeah, so I'm focused on kind of bringing together a lot of my experiences, which includes AI, philosophy, and spirituality
(04:27):
coming from the Eastern perspective, too. And then yeah, just the technology background as well.
That's fascinating. You just listed like eight different topics that I want to double click on. And I don't know if we have time to double click on all of them. But I think that one thing that I connected with you over was how we both really started in these corporate backgrounds. I was a consultant,
(04:47):
you were a lawyer. I'd love to hear your track from, you know, going to law school, becoming a lawyer, to now studying philosophy, and how that all happened in the first place.
Yeah,
Chad Woodford (04:58):
this gets into, I mean, I think this is so relevant for your topic too, for happiness, because it gets into these questions about sort of dreams and happiness and conditioning and all that. So my journey was, like, where to start, right? So I think, looking back, I made a lot of decisions in my life based on, I guess you can call it, fear or
(05:20):
practicality. And I was trying to find a way to fit my dreams and passions into a career that could make money. I grew up very working class. So I had this sort of, like, conditioning that you have to play it safe and do things that are practical, career-wise. And so I became an engineer, because I was good at math and science. And it's safe, it's practical, it makes money.
(05:42):
But I did have this fire, I've always had this spark of wanting to have an impact and to change the world in some way, or to help people in some way. And so the idea behind law school was to get into some kind of, like, consumer rights work, or some kind of impact work, some kind of, like, policy work, something like that. And also, I wanted to be a writer for most of my life,
(06:04):
I had this dream of being a writer. And so for me, with the practical mindset, the compromise was being a lawyer, where you actually do a lot of writing, but then, of course, you make a lot of money. So, so yeah, that was the motivation behind that whole thing. And so I was a corporate lawyer, but I was doing some of that consumer work on the side, doing some pro bono and trying to find a way to kind of get into more of the
(06:28):
policy work. And so in that process, and I think maybe you have a similar experience, I was really stressed out. Being a lawyer is very stressful, in the corporate world especially; I was doing, like, a lot of large transactions and working in startups in Silicon Valley. And it was just kind of really draining. And so for me, I started to explore other things.
(06:49):
And I started doing yoga and started really looking more into, like, inner questions or bigger questions or spiritual questions, that kind of thing.
Unknown (06:58):
Sure. And I bet, so these inner questions had popped up while you were at the law firm, right? And was the next step straight to India, to answer those questions as part of that work?
Chad Woodford (07:09):
Yes, and no. I mean, the way I apparently do things is I sort of stick a toe in the water, but then I keep one foot on land. And then I keep kind of doing that back and forth until, like, I finally commit to something new. So it's, it's a process for me. So I did a teacher training while I was still a lawyer, so I was teaching on the side, and I was,
(07:30):
I had a law practice for a while. And that was the way I was finding, kind of, like, satisfying the interest while keeping one foot on the practical side. But then I was fortunate enough to work at Twitter in the early days as a lawyer, and that then gave me a financial windfall, which allowed me to take some time off. So then I
started to write a novel about spiritual crisis and all these
(07:53):
kind of big questions, and did that full time. And so that was kind of the thing; writing the novel was me kind of opening up a portal to step through, because it gave me the excuse to research a bunch of things I was interested in, and to have some experiential research, to do things that I wanted to do. But I guess I'm the kind of person who can't just go out and do the
(08:14):
thing, I have to write a book to give me an excuse to do the research, to do the thing. So that included going to Burning Man, and then eventually, even plant medicine. I did Ayahuasca initially because I had the idea to have my main character do Ayahuasca. So all of those experiences of going to Burning Man and getting more into, going to Peru, and all that is what
(08:37):
then kind of led to going really deep into the spiritual path. Yeah.
Unknown (08:42):
Wow. And I think it's really interesting the way you put that, because I think there's some people that can separate their professional lives and their spiritual paths. And they're able to say, you know, I work this nine to five or nine to seven, and then I can be spiritual the rest of my life, I can still go to Burning Man, I can still do whatever. There's other people who feel the need for their entire life to be
(09:05):
their spiritual journey, and figure out how to make money on your spiritual journey. And it sounds like you constantly kind of just weaved back and forth and interwove the two until slowly but surely, you've gotten closer and closer to your life being the spiritual path. What are your thoughts on that?
Yeah, I think that's right. I think that's right. I mean, it's interesting to frame it as a positive, because I think
(09:26):
sometimes I think of it as, like, me maybe not having the courage to go just fully into the other direction. But at the same time, they do say that you shouldn't make your passion your vocation, because then you'll come to hate it or something like that. I don't know if I believe that, but that's a thing they say.
Yeah, so I mean, you went on this spiritual journey. You did Ayahuasca, you went to Burning Man. Then eventually you went to
(09:48):
India. What were your major takeaways from these types of experiences that, you know, everyone can get something out of, everyone can learn from?
Chad Woodford (09:58):
Yeah, well, one thing I have to say about that actually, just to go back to your last question, was, yeah, for me, it's part of a longer theme. So like, as I was saying before, part of the motivation for law school was wanting to make a difference. And I kind of learned, as I was pondering this question and starting to try to do things with law in that direction, I started to have this realization that this law professor, Lessig,
(10:21):
Lawrence Lessig, had, which is you have to keep sort of moving the lever up, in a sense. So he started off doing copyright reform, because he was passionate about trying to help people have access to art and, you know, fair use and all that. And then he realized that the real challenge with trying to change copyright policy was actually that politics is broken
(10:42):
in DC. And so then he decided to pivot and address the lobbying and just the ways that politics is broken in DC, because if you don't address that, then you can't really change the way laws are made and how they affect consumers. And so I was inspired by that. But then I took it to the next level, realizing that you actually can't really change things, fundamentally, unless
(11:06):
you change the way people think, and you change the kind of shared worldview that we have. So that's ultimately how I got to philosophy. But yeah, so I think, for me, the biggest way, the most effective way to make a difference in the world wasn't law. At the end of the day, it was actually, like, philosophy, in the sense of recognizing that
(11:29):
everyone believes in this materialist worldview, and if we can just shift that and change the way they think, then they'll be open to, you know, more mundane things like policy change and different kinds of efforts in that direction. So anyways, I just wanted to kind of tie that together.
Unknown (11:44):
Yeah, no, thank you for bringing that in. Just to double click on the materialist worldview there, you're saying, because politics are so centered around financial implications, and I guess the way companies will support campaigns that lead to things that might not be ethical, or might not be the
(12:05):
best for the world, that's stopping the love and light of the world. Is that what you're getting at?
Chad Woodford (12:11):
Yeah, kind of.
Yeah, it's like, so when I say the materialist worldview, what that means is this idea that goes back to, kind of, Descartes, or to the scientific revolution, which is that everything is matter. And because of that, there's nothing higher, there's no sort of, like, mystical realm or anything like that. There's just matter and just sort of deterministic,
(12:33):
mechanistic reality, which is kind of meaningless and random, and everything we experience is just a fluke; that's the materialist worldview. And I think it runs deep, because it's kind of depressing to believe that, you know. A lot of people, whether you are an atheist or not, I think a lot of people just don't have any meaning to derive from that, or don't know
(12:56):
where to find meaning in their lives because of that. And so, yeah, the ways that can play out are multivariate, but in terms of your question about policy, for example, the way that can play out is that because we don't consider nature, for example, to be anything but a bunch of, like, raw resources for us to use to our advantage, then we don't make policy that preserves it, because there's not a sacred
(13:19):
element to the world. We don't approach it that way. Does that make sense?
Unknown (13:24):
That does make sense. So it sounds like you found this passion. How did you start making an impact? And at what point did that come into play?
Chad Woodford (13:33):
Yeah, I mean,
it's still in progress. You
know, I think a large part, a large part of me being a yoga teacher is that my experience with the yoga that I studied in India was that it expanded my consciousness. And it changed the way I think, and I wanted to share that. And I do share that, because I think it's so important to help people and give them practices that expand consciousness, that give them the
(13:56):
direct experience of unity, which is what yoga is all about. And through that process, you know, you start to tap into your inherent bliss nature, which is a form of happiness. So that's one way I've been doing it, by teaching yoga and offering different things in that space. But then in terms of shifting worldviews more broadly, I'm in school for philosophy, cosmology and consciousness, because I
(14:17):
want to find other ways of doing that. And I think that actually, and we'll get into this when we get into the AI, I think the AI revolution that's happening is going to force us to reckon with our worldview and could actually be a vector for us to, like, shift the worldview and to kind of give humans a more important
(14:38):
role in the world.
Unknown (14:43):
I love that, and just to level set, you know, you kept on using the phrase expand consciousness. You know, just to make sure everyone understands what it means to expand consciousness, and, you know, some people have watched that WeWork documentary where they talked about elevating the world's consciousness, but what do you really mean when you say, you know, you use yoga to expand consciousness and you are on a journey to expand your consciousness?
Chad Woodford (15:04):
Yeah, so we can
talk about what consciousness is
first. And I can answer that.
Yeah. Because it's relevant to AI too. I think a lot of the AI conversation is happening around consciousness. So yeah, so from a yogic standpoint, consciousness is fundamental, which is to say that everything is consciousness. So consciousness is like the fundamental substrate of reality. And that consciousness
(15:25):
then, as a form of play and a form of love, actually, in the yogic worldview, is creating the world. With that intention, it's creating the world as a form of play. And we're all expressions of that. And so we're all kind of small pieces of this larger consciousness.
(15:46):
If you put it in a Western philosophical framework, it's idealism. That's the philosophy. So when you say expand consciousness, what that means is, like, kind of the first step, and this is the first step people often experience when they practice yoga, is you disidentify with the mind, because the mind is just one small part of you. I think most people in the world today are primarily
(16:07):
identified with their mind. And so the first step is that you start to realize that you, as yourself, are so much bigger and broader than the mind; you're actually this consciousness. So we're moving out of identifying with the mind, expanding into this identification with consciousness. And then the yoga journey is starting to have
(16:27):
experiences of, yeah, it's hard to talk about with words. But yeah, it's like, one part of it is you start to let go of conditioning. So conditioning is the stuff you get from society, and from your parents, and from, you know, peers and colleagues, that is maybe sort of, like, not totally true or not helpful to
(16:48):
you or helpful to society. These are just kind of, like, wrong notions, you know, small ideas, that kind of thing. So you're getting rid of the conditioning. And through that process, you're starting to become more and more sort of free, in a sense, more liberated. And as you go through that process, you're starting to expand your
(17:09):
consciousness to include more and more of your bigger self and bringing in more and more of this unity experience. I don't know if that makes sense. But that's the basic process. Got it?
Unknown (17:23):
And for those who are
new to this concept, what would
you say is the first step they should take? Is it to go to a yoga class?
Chad Woodford (17:30):
Yeah, I mean, I
think that could be helpful. You
know, it's interesting, because in the West, we primarily identify yoga as stretching. And there's so much more to it than that, but at the same time, because we're so disembodied, like, in the sense that we're so not in our bodies, we're so kind of living in our heads all the time, I think part of the reason
(17:51):
that that has been so helpful and is such a good access point is that we should start with the body. So if you go to a yoga class, you know, it's called asana, that's the stretching part of the yoga practice. If you go to an asana class, that's going to help you to get into your body. And then there are, you know, that practice alone is so good at putting you into a
(18:14):
meditative state through the body. And you can have these experiences in that practice of unity consciousness, of expanding consciousness, that kind of thing. And then once you've done that for a little while, then yes, you can start to explore other parts of the yoga practice, or other modalities beyond yoga too. The parts of yoga that really helped me especially
(18:35):
were pranayama, so working with the breath, and that can be breath work, and also these things called kriyas, which are the most effective and powerful for releasing conditioning and expanding your consciousness and that kind of thing. So kriyas are primarily associated in the West with Kundalini Yoga, but they're actually a much broader tradition and a much older
(18:56):
tradition than that. And the kinds of kriyas that I studied are from another tradition in India. But yeah, so there's kriya, there's meditation, there's a lot of mantra you can do that really helps with this. There's the sacred rituals. There's all kinds of different aspects to the practice. Yeah.
Unknown (19:13):
Got it. Thank you. So
that makes sense. And, you know,
I'm glad we kind of level set it on consciousness and what it is for humans, because I think one direction I want to also take this is the consciousness of AI and what that looks like. So why don't we kind of now shift forward and first define what AI is for all our listeners, and then we'll kind
(19:35):
of dive into the meat of this.
Chad Woodford (19:36):
Yeah. So this is
a good, good question. Because,
in a way, it's hard to define, actually. So I'll come at it from a couple different directions. First of all, I think at the most basic level, it's just making computers, like, think more like people or behave more like people. And that can be, like, anything from, I don't know, learning how to solve
(19:58):
problems or identifying patterns. These are the kinds of things that it can be. But what's interesting about AI is that we've been throwing this phrase around for decades, and it seems to be a moving target, in a way. So there's this great quote from Pamela McCorduck, who says that AI suffers the perennial fate of losing claim to its acquisitions, which
(20:20):
eventually and inevitably get pulled inside the frontier, a repeating pattern known as the AI effect, or the odd paradox: AI brings in a new technology, people become accustomed to this technology, it stops being AI at that point, and a newer technology emerges. So it's like, what AI is, is always just across the horizon. And so that's one way to think about
(20:43):
it. But to bring it back down to earth here, AI is basically a collection of different technologies. Right now, the hot one is machine learning and deep learning. And that is allowing computers to be very good at pattern recognition. Basically, what that is, it's based on this technology called neural networks. And so it's a technique where they try to model a theory about how the brain works and how the mind
(21:07):
arises from the brain. So it's this very, actually, materialist theory, that the mind is an epiphenomenon of brain processes, and so if we can just recreate a kind of virtual version of neural networks, of the neurons in the brain, then similar intelligence will naturally arise. And so that's where we are with machine learning. Back in the day, like
(21:28):
in the 20th century, there was another kind of AI called expert systems, which was more like, they thought that if they could just plug in, like, millions of sort of true statements, that that would help create intelligence, like, you know, men are mortal, and, I don't know, car tires are made of rubber, and all these things; you'd just put all these things into a
(21:49):
giant database. They thought that would be AI, but that didn't work out so well. So anyways, yeah, that's kind of the long-winded answer to AI. What's interesting, the last thing on that, is that AI is really good at, sort of, like, playing chess and playing Go. Famously it beat the world champion in chess and the world champion in Go in the past
(22:10):
couple of decades. So it's very good at these, like, pure reason kind of problems, and constrained problems that are games, and that kind of thing. Language is also very accessible. But it's not very good at, like, walking, you know, seeing the world in certain ways and interacting with the world in certain ways.
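To make the neural-network idea above a little more concrete, here is a minimal, illustrative sketch in plain Python. The single artificial neuron, the OR-pattern data, and the learning rate are all invented for this example (nothing here comes from the episode itself); it only shows, in miniature, the inductive pattern-fitting loop being described: predict, measure the error, nudge the weights.

```python
# A toy "neural network": one neuron learning the logical OR pattern.
# Real deep learning systems have millions of parameters, but the core
# loop (predict, measure error, adjust weights) is the same idea.
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training data: inputs and the pattern we want the neuron to pick up (OR).
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # weights
b = 0.0                                             # bias
lr = 0.5                                            # learning rate

for epoch in range(2000):
    for (x1, x2), target in examples:
        pred = sigmoid(w[0] * x1 + w[1] * x2 + b)
        error = pred - target
        # Gradient-descent step: move weights to reduce the error.
        w[0] -= lr * error * pred * (1 - pred) * x1
        w[1] -= lr * error * pred * (1 - pred) * x2
        b    -= lr * error * pred * (1 - pred)

# After training, the predictions sit close to the 0/1 targets.
for (x1, x2), target in examples:
    print((x1, x2), round(sigmoid(w[0] * x1 + w[1] * x2 + b), 2), "target:", target)
```

Run as-is, it prints predictions near 0 or 1 for each input; the "learning" is nothing more than repeatedly nudging numbers until the pattern in the data is captured.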
Unknown (22:26):
Yeah, it's interesting.
It sounds to me like AI is great at logical activities, but struggles with a little bit more outside-of-the-box activities. Is that
Chad Woodford (22:38):
Currently, currently, yeah, that's right. Right. And
Unknown (22:41):
I'm sure at some point,
AI will catch up and be more
creative than all of us. But now we have a little bit of a head start. So let's kind of talk about that, where today, people are using ChatGPT to send emails, they're using it to come up with marketing campaigns. And I think that's what most people experience of AI in their lives. What is AI beyond that? Like, what else is going on with it?
(23:05):
And also, what are the reasons that you think we have to be excited about AI today? Yeah,
Chad Woodford (23:13):
I mean, so yeah,
ChatGPT is the popular thing right now. And that's part of the generative AI explosion that happened last fall. So you've got generated text, you've also got generated images. So that can be like DALL-E, or Midjourney, those kinds of things, Stable Diffusion. And so basically, what's happening there is it's been trained on a very large set of data, like, in the case of text, millions and millions and millions, billions, probably, of
(23:36):
documents, and tweets, and internet things. It has been trained on that to learn how to talk, or how people talk, and to learn a little bit about some topics. And then it's very good at sort of regurgitating what it has learned, basically. And so it's doing that through this complicated process that includes a kind of a statistical phenomenon, where
(23:59):
it's like, yeah, it's a very detailed kind of thing. But basically, it's generating things that are this simulation of intelligence. So it doesn't actually understand, like, what it's saying, for example. It doesn't understand conceptually, or semantically, like, what a sentence means. It just knows that that is a very plausibly, sort of, realistic-sounding thing that someone might say. So
(24:22):
that's kind of the state of the art, you know, some people call it an elaborate autocomplete, or something like that. And so that's the current state of AI.
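The "elaborate autocomplete" description above can be sketched very roughly in code. The toy corpus and bigram counts below are invented for illustration and are nothing like the scale or internal mechanics of ChatGPT, but the generate loop shows the general shape: given the words so far, pick a statistically plausible next word.

```python
# Toy "autocomplete": pick the next word from counts of what followed it
# in a tiny corpus. Large language models do something loosely analogous
# over huge corpora, with learned neural weights instead of raw counts.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which (a bigram table).
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def generate(start, length=8, seed=1):
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:                         # no known continuation: stop
            break
        words.append(random.choice(options))    # sample a plausible next word
    return " ".join(words)

print(generate("the"))
```

The output sounds locally plausible without the program "understanding" anything, which is the point being made about simulation versus comprehension.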
What AI researchers want to create is true intelligence, what they call artificial general intelligence, which is also called super intelligence. And that's the idea that it's a computer, it's an artificial intelligence, that could basically reason like a person,
(24:45):
speak, you know, confidently about any topic, solve problems in any domain. That's kind of the longer-term goal. So yeah, so I think we can talk more about that super intelligence thing in a minute, but the reasons I think I'm excited about AI is I think it's going to automate a lot of monotonous work that people do and free us up to do more creative things,
(25:06):
or it's going to be more of a collaborator with us. So we can kind of use it alongside of us to create things, to solve problems, to do kind of mundane work and that kind of thing, so that we can then be freed up to focus on what matters: community, family, being of service, creativity of different kinds. So I think, I like the fact that it's going to be like a collaborator. I think in the short term it's
(25:28):
going to kind of, like, shift what people do. I think it's gonna, like, you know, a lot of white collar workers are going to lose their jobs. We can talk about that, too. But yeah, and I think even, like, another positive that might be seen as a negative is the way that it's going to disrupt technology. So this is a big unknown. But I think in general, there's this idea in technology policy of Schumpeterian destruction, or
(25:52):
creative destruction, which is this idea that the more that technology, or technology companies, fail, the better long term, because then new and better things arise. It's kind of, you know, it's kind of the same idea with Darwinian evolution, or just the way that life works, you know, like, things die so that new things can come in. And so, like, AI, I think, is going to
(26:15):
disrupt, for example, Google search. Like, I think Google search is, it's fascinating, because Google's in a tough position where their bread and butter is search and search advertising. But they also see the writing on the wall, in the sense that they're gonna have to start replacing traditional search results with just, like, kind of a GPT or Bard
(26:37):
equivalent type thing, where you just type in your question, or what you want to know, and it gives you the answer without showing you so much the web results. And so that's going to affect Google, it's going to affect their business model, it's gonna affect the web and how the web is designed and created, and content. And then, of course, on that topic, like, it's going to come in, and it's going to change, like, Wikipedia,
(26:57):
potentially, and the way that content is created, because a lot of the content now on the web will be generated by AI. And so all these things are creating a lot of chaos on the web right now. But I think ultimately, that might lead to something even more interesting.
Unknown (27:13):
More interesting, more interesting because why? Yeah, can you elaborate on that?
Chad Woodford (27:19):
I mean, I don't know exactly. But I think there's a lot of challenges. Well, I'll admit, there's a lot of challenges, because then we start to get into, like, how do we know what's true? How do we know what's real? There's a lot of misinformation risks with AI generating content. But I think, I don't have a specific, like, I don't know what's going to happen on the other side of this intense disruption that's happening on the web
(27:41):
and with search. But I have to think that something interesting will come out of that. Now, again, it's going to create a lot of disruption, and that disruption includes, again, jobs, I think, and so we can talk about that, because I think that's a whole other thing that we need to grapple with. Yeah,
Unknown (28:00):
I mean, for me, I think
that's the main reason to be
scared of AI. It's the fact that all these white collar jobs will be eliminated, I would think within a year or two based off of what I'm seeing. Yeah, maybe it's longer, but for trust reasons, because a lot of humans don't necessarily trust AI yet. I think with anything, over time, it becomes the way. Like, what is
(28:24):
that going to look like? Are we headed towards, like, a WALL-E scenario where people just have universal socialism support, and they're not working at all? Like, what is the future?
Chad Woodford (28:34):
Yeah, that's a
good question. And I want to say
I want to be clear, I'm not, like, an AI evangelist. I feel it's very complicated, and I'm personally somewhat conflicted about it. I'm not some kind of, like, naive person who just thinks that AI is going to be great and there's nothing to worry about. I think there's a lot of things to worry about. I think those things are not so much the fact that some super
(28:55):
intelligence is going to kill humanity, but more these concerns about jobs and the economy, and what it means for that, and also misinformation. So I think, to answer your question about the jobs and all that stuff, I think, yeah, I mean, the WALL-E scenario is, I mean, that is dystopian, obviously, and unappealing. But that actually seems better than some of the alternatives, like,
(29:19):
because that assumes that we'll actually figure out how to redistribute wealth and support people, and I'm not even sure if we can do that. I don't think anybody is sort of grappling so much with the amount of disruption that's coming. I think it's like a tsunami that's coming. And I think some people are sort of aware of it, but not enough people are actively
(29:40):
working to address it. So I hope at a basic level that we can achieve WALL-E, because that means that we're at least taking care of people. But then this gets into this conversation around universal basic income, UBI, and I don't know a ton about that, but I mean, it seems to me like, like, the conversation around that entirely revolves around your belief about human
(30:04):
nature. And so I think that UBI people, they have a very, very optimistic view of human nature, which is to say that if you give people money, if you support them, they won't just be couch potatoes watching TV all day, that they'll actually go out and be productive and create things just for the fun of it, just for the pure joy of it. And that
(30:27):
will result in a whole other, like, almost like a new renaissance, in a way. So that's, I like that idea. I'm attracted to the idea. But it's hard to tell based on how people are right now; if you look around, a lot of people do watch TV, and they do kind of numb out in different ways. And they aren't just, like, doing art projects or whatever at home. I
(30:50):
think part of that is because the world is too overwhelming. They don't feel supported in certain ways. And so I think they do those things because it helps them to kind of distract from these feelings. And this gets back to kind of another angle that I could have taken from the previous conversation,
(31:11):
which is that I feel like a big challenge that we face today is that the predominant kind of feeling tone of society right now is fear, in different ways. And I think, like, a big part of my mission is to try to address that in different ways. I think if we can get people to either
(31:32):
learn how to move into fear, or to let go of fear, that potentially could change everything. So I think the fear is part of why people are not inherently sort of creative when they have free time. I think there's so much going on there emotionally.
Unknown (31:48):
Now, that's
interesting. And I think that's
a worthy tangent. You said your, kind of, your core mission is related to fear. So what is it about fear, specifically? And how did that lead you to this realm of AI? Is it because you want to help mitigate the fears of people as we enter this new age?
Chad Woodford (32:08):
Yeah, kind of. I mean, part of my current mission is to comfort and inform people about AI, because I think there's a lot of fear around AI. I think there's a lot of fear mongering happening around AI, I think not always intentionally, but I think a lot of people, like, I don't know if you've heard of this guy, but there's this guy, I want to get his name right, Yudkowsky, Eliezer Yudkowsky. I think he's running around sounding alarm bells,
(32:31):
saying that, you know, if we don't do something immediately, something drastic, AI is gonna kill us, you know, and he's been saying this for decades. And people are listening, you know, he had a TED talk, I think, last month where he was saying this. But yeah, there's people going around, you know, saying that, and there are these transhumanists, who think that the solution to everything is just for all of us to upload our consciousness into machines, and then stop worrying
(32:54):
about being people in bodies. But yeah, so my mission is to comfort people and kind of, like, try to counteract a lot of that conversation that's happening around, especially, super intelligence, but also just in general, with jobs and all that. And because I think the way to change the world, in a way, is to kind of help people understand that there's nothing to be
(33:14):
afraid of, ultimately. And the more you can cultivate that feeling, the more happy and powerful and successful you're going to be in your life. And that's a big part, it goes back to yoga, too. I mean, the Bhagavad Gita, this book, one of the most famous books in yoga, the ultimate message of that book is that no experience is too big for your soul to handle.
(33:37):
And so I just think that if people could have that feeling, like, that feeling deep inside, it would really change the world. And as a side note, by the way, Oppenheimer, the movie, Oppenheimer was very, very much in love with the Bhagavad Gita. So interesting, a little side note there.
Unknown (33:55):
Do you think that led to the way he went about his work? I mean, and for those who are listening, you probably know about the movie that just came out, Oppenheimer. He's the gentleman who led the project to create the atomic bomb, which was eventually dropped on Japan. How do you think that impacted his work? Because I think that's, that's fascinating, that he was a big fan of this yogic
(34:16):
philosophy that in theory is all about, I don't want to put words in your mouth, but you're the yoga expert, but hopefully about love, about, you know, universal consciousness, everything happening for us, not to us. Yet he created probably the deadliest weapon of all time. Yeah. So what do you think he actually got out of this book?
(34:40):
Well, yeah, that's interesting. I mean, you know, I mentioned that, but I don't really know his whole story, I haven't; the movie is based on a biography that's supposed to be quite good, and I don't know the timeline there. But I do know that he had a lot of regrets about his role in creating the atomic bomb. And I think they say the Bhagavad Gita can be understood on seven levels, and so I think there's a progression. And I think, for those who don't know, the
(35:03):
Bhagavad Gita is basically the story of this warrior, Arjuna, where the book starts out and he's facing this battle where he's supposed to fight his cousins. And he just doesn't want to; he throws down his weapons. And he says, I can't, this is not right. It's not right to commit violence against people that I know and love.
(35:23):
And Krishna comes to counsel him and teach him yoga, and to teach him, basically, that he has to live the life that he's been given and perform the things that he's going to perform from a state of love and compassion, a state of yoga. It takes a lot to kind of unpack all that, because he does pick up his weapons and fight in the battle, and it's a battle
(35:46):
that's happening on, like, a different level. So anyways, I think Oppenheimer maybe took some lessons from that and thought, well, I'm an expert in this field, so I should serve that role, and, you know, the consequences can be what they are. But then I think later in life, he understood the book at a deeper level and had some regrets about his role in that. So, yeah. Right, right. So I guess on that
(36:07):
note, here we are, I guess, 70 years later, give or take, and we have this new technology, artificial intelligence, that is probably going to change the world in ways we can't even imagine. Are we headed towards world peace or World War Three right now?
Chad Woodford (36:28):
What a loaded question. So this, this brings us into, I think, the question of ethics, because, so, I do agree that not enough is being done to shape AI in the right ways. So I think it all depends on how we create it. I think the AI is going to reflect the mindset of the people who
(36:49):
are making it. And currently, it's mostly being created by people, engineers mostly, and product managers, who have a certain kind of mindset, but they're not, it's not being informed, I feel, enough by, let's say, right-brain people. And so you've only got left-brain people working on this stuff, and so the technology is going to be left-brained, I
(37:09):
think. If we could, and there's nothing wrong with being left-brain, but I think it's a little bit one-sided, or it's a little bit like, let's imagine if the whole world or the whole society was just engineers. We wouldn't necessarily want that; they have a good role, they have an important role to play. But we want to have other people involved, I think, in the creation of AI, you know, and that includes ethicists, people
(37:29):
who are trained in ethics, philosophy, that kind of thing. AI is going to be a reflection of our humanity, and we want it to reflect the highest and best version of humanity and not the past, you know, because part of the problem with machine learning, for example, is that it's trained on, essentially, data that reflects the past. And I think we can all agree that
(37:52):
maybe we weren't our best selves in the past, however you want to put it. So I think it's important to train it from the right mindset, to train it on the right things, to help it to understand human values and to incorporate those things. Like, a good example that's happening today that I'm inspired by is, there's a company called Anthropic, which is a bunch of people from OpenAI,
(38:15):
which is the ChatGPT company; they went and created a different AI company, because they wanted to make the whole mission of their company ethical, humanitarian, in a sense, AI. So they have this alternative AI chatbot, called Claude, and it's been trained on what they are calling a constitution. And so this
(38:36):
thing has a constitution, which is to say that it has, like, these basic guiding principles that have been hard coded into it, that include, you know, sort of, like, human rights and different, you know, different concepts about privacy and all these things that we want the AI to value. And so I like that project. So that's one example of how we can sort of shape and guide AI in a different way. One last thing on
(39:00):
that is, there is another project, or a couple other projects, where AI is being trained on wisdom, or, like, spiritual texts and that kind of thing. And so I'm curious about that, too. I think that, I haven't looked into it, but it might be helpful. It might be interesting. I don't know.
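As a loose, hypothetical illustration of the "constitution" idea mentioned above, and emphatically not how Anthropic's Claude is actually built (their published approach trains the model against written principles rather than filtering finished outputs), one could imagine a list of guiding principles that a draft response is checked against. Every name and check below is invented for the sketch.

```python
# Illustrative only: written principles plus a crude check on a draft reply.
# A real constitutional-AI system bakes principles in during training; this
# toy version just shows the concept of hard-coded guiding values.
CONSTITUTION = [
    "Do not reveal private personal information.",
    "Avoid content that demeans or dehumanizes people.",
    "Prefer responses that respect human rights and privacy.",
]

BLOCKED_PHRASES = {"home address", "social security number"}  # toy stand-in

def violates_principles(draft: str) -> bool:
    """Crude illustrative check: flag drafts containing blocked phrases."""
    lowered = draft.lower()
    return any(phrase in lowered for phrase in BLOCKED_PHRASES)

def respond(draft: str) -> str:
    if violates_principles(draft):
        return "I can't share that; it conflicts with my guiding principles."
    return draft

print(respond("Sure, here is their home address..."))
print(respond("Here is a summary of the article you asked about."))
```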
Unknown (39:17):
Yeah, it's interesting, because just like different people are more spiritual, different people are more left-brained, different people are more right-brained, it'll be interesting to see if there becomes a universal artificial intelligence that everyone uses, or if people just gravitate towards the one that aligns with their current mindsets. It's almost like we're turning towards, we have the American AI, we have the North Korean AI, we
(39:40):
have the Switzerland AI, you know, totally different philosophies. And that's kind of scary, because if everyone's using what they already believe, it actually feels like it'll just continue to separate us, because it'll reinforce the way people already feel.
Yeah, I think one of the biggest challenges we have with doing what I was just talking about is two things. It's the
(40:01):
pressures, I think, we feel internationally. So part of the reason that, let's say, the US government is not regulating AI companies is because they're, I think, partly because they're concerned that if we do that, it'll slow us down relative to China or Russia or even to India or something like that. So I think this is one part of the reason that AI companies are
(40:25):
just left to kind of self-regulate. And then another thing is capitalism, you know. I think these companies within the US even, you know, Microsoft versus Google versus, I guess, Facebook, or Meta or whatever, they're all in an arms race to see who can, kind of like you were saying, have the most popular AI. And because of that, I think everyone's rushing, and
(40:46):
I think they're not taking enough precautions.
Right. Right. So I have a few thoughts on that. Yeah. First of all, it's better to go slower, in my opinion, than, you know, put something into the world that could have serious negative implications. And then the second thing you brought up, about capitalism, and I think that's especially relevant to
(41:09):
America: like, is capitalism even sustainable as all these technologies take away all these white collar jobs? And I mean, I would like to think that as many jobs as will be taken away, jobs will be created, but that might not be the case. Like, is capitalism even possible in the future?
Chad Woodford (41:29):
Yeah. That's a
great question. And it's funny,
because I feel like not too long ago, asking that question could make you sound like, I don't know, like a sort of Pollyanna kind of communist or something, you know, people would criticize you for, I don't know, being a lefty or something. But, but I feel like part of what AI is doing, it's making all these
(41:49):
questions kind of crucial and kind of mainstreaming these questions. I mean, Ezra Klein, the podcaster and New York Times columnist, he's asking this question on his podcast, you know, is capitalism viable? Is that part of the problem? And I don't know the answer to that. I think it seems like everyone I know is talking about how we're in late-stage capitalism, and it feels like a thing that could
(42:10):
definitely at least use a reboot, or an upgrade or something, because, not to get too much into the history and the philosophy, but as some people probably know, capitalism came out of, in a way, like, the same thinking that created materialism. So, like, a lot of the people, like the British philosophers, John Locke and Thomas Hobbes and all these
(42:33):
people, you know, were the ones who were part of the materialist philosophy movement. And so capitalism kind of came out of that same kind of thinking, you know. And I think the reason I'm saying that is because it feels like we're in a time when all these ideas that arose in the 17th and 18th centuries are maybe seeing their expiration date or something. So I don't
(42:55):
know. I mean, obviously, communism is not the answer, right? It feels like, it's funny, I was just reading, for school, I was just reading this thing where they're talking about how the theme of the 20th century was capitalism versus communism. And it feels like the 21st century is us sort of, like, transcending that duality and finding a third way of some kind. Who knows what it is, though?
Unknown (43:18):
Right, right. So only time will tell. Yeah. So with that, I think let's dive into super intelligence. And as we've talked about intelligence, what is super intelligence?
Chad Woodford (43:29):
Man, this is one
of my favorite topics. And
actually, I've been working forever, it feels like, on a podcast and YouTube episode about super intelligence. So I love talking about this. It's fascinating, because it requires you to take a step back and to ask, like, what is intelligence? And when somebody says, you know, AI is going to become this super intelligence, you know, like Ray Kurzweil and people
(43:51):
like that are saying this, it's like, what does it mean for something to be far more intelligent than us? Does that mean that they're, they're just better at, like, math and science? Or they're faster, more efficient at pure reason and problem solving? Or does it mean, like, if intelligence evolves to a certain level, does it become wisdom instead? You know, like,
(44:13):
what is intelligence, exactly? A lot of the fears about super intelligence are that they'll be so smart that they'll hack into something, like, they'll figure out some kind of, like, biological phenomenon that we don't understand, they'll create some kind of, like, super virus, and then humanity will be dead. Or the famous, like, fear that people talk about is, like, they'll realize that, to maximize their resources, to create more
(44:35):
of themselves, they'll, like, mine all the metals that they need, the precious metals they need and all the precious minerals, and then they'll realize that they need, it's kind of like The Matrix, they realize that they need people to be resources and all that. But I don't know that that's the case. I think if they were very smart, they might not be so, like, I
(44:56):
don't know. It feels like we're just creating, we've watched too many dystopian movies and we see this dark side of humanity. And I think we're almost imagining that it would be like that, that it would be like these kind of violent, mindless things. But I don't know, I think it's hard to tell what a super intelligence would be. And I want to say, like,
(45:16):
there's a lot of fear that it's going to happen in the next year, two years, 10 years. You know, I think Ray Kurzweil said by the end of the 20s, or something, but I'm not convinced, because we don't have the technology, I feel. So this goes back to how I was defining AI. Currently, it's machine learning. And machine learning is based on inductive reasoning. And that's only one of three kinds of ways that people think.
(45:41):
And so, not to get too far into this direction, but basically, the way we think is, there's three kinds. There's deductive reasoning, where you have a general premise, and then you can apply that to specific circumstances. So for example, all men are mortal. Socrates is a man. Therefore, Socrates is
all men are mortal. Socrates isa man. Therefore, Socrates is
(46:03):
mortal. That's deductivereasoning. Inductive reasoning
is just noticing patterns anddrawing general conclusions from
those patterns. That's machinelearning. And so yeah, if you
show a computer, a millionphotos of dogs, they're going to
eventually realize that thisblobby thing that has hair and
two eyes, and whatever is a dog.
(46:25):
But that's just another kind of reasoning. The third kind of reasoning that AI has not yet addressed is abductive reasoning. And that's where we have these flashes of insight, where we have inspiration. And this is the kind of thinking that happens with inventors and detectives, and those kinds of
(46:46):
people who have been spending a lot of time with a problem, and then all of a sudden, one day, the solution emerges. And there's no way that we have come across so far to automate that process. And so, I feel, this is maybe a controversial opinion, I don't know, but I feel like until we've addressed abductive reasoning, there's not going to be super intelligence, because super intelligence requires an
(47:11):
understanding of semantics and an understanding of cause and effect; it requires a map of the world and how the world works. And if you look closely enough at machine learning, these machine learning programs like ChatGPT, they're very brittle. Like, they don't actually understand what they're saying, they don't actually understand how the world works, and so they're very easy to, like, throw off. And so it's like a
(47:32):
magic trick, you know; they seem highly intelligent, but then as soon as you present them with some problem or challenge that's outside the realm of what they've been trained on, they fall apart. So anyways, I think, super intelligence, we might get there, but I don't think we have the technology yet to make it. And I'm not convinced that when we do make it, I'm not convinced that it will be
(47:52):
inherently, like, evil or something.
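To make the deduction-versus-induction distinction above concrete, here is a small, invented sketch: deduction applies a stated general rule to a specific case, while induction guesses a rule from observed examples, the way machine learning infers "dog-ness" from labeled photos. The abductive flash of insight described above has no obvious analogue here, which is the point being made.

```python
# Deductive reasoning: apply a general rule to a specific case.
def deduce_is_mortal(name, men):
    """All men are mortal; if `name` is a man, conclude `name` is mortal."""
    return name in men  # Socrates is a man, therefore Socrates is mortal

print(deduce_is_mortal("Socrates", men={"Socrates", "Plato"}))  # True

# Inductive reasoning: infer a label from observed, labeled examples,
# the way a vision model generalizes from many dog photos.
observations = [
    ({"has_fur": True,  "barks": True},  "dog"),
    ({"has_fur": True,  "barks": False}, "cat"),
    ({"has_fur": True,  "barks": True},  "dog"),
]

def induce_label(features, observations):
    """Guess a label from the most similar past example (toy nearest match)."""
    def similarity(a, b):
        return sum(1 for key in a if a[key] == b.get(key))
    best = max(observations, key=lambda obs: similarity(features, obs[0]))
    return best[1]

print(induce_label({"has_fur": True, "barks": True}, observations))  # "dog"
```

Nothing in either function has a mechanism for the sudden, unprompted leap to a new hypothesis, which is the missing abductive step being argued about.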
Unknown (47:55):
Right, right. And
that's abductive reasoning. How
does that compare to, like, the human version of intuition or the subconscious?
Chad Woodford (48:03):
Yeah. Well,
that's an interesting question, actually. So I think it is related, in some sense. And I think once you get into the conversation about intuition and the subconscious, that gets into a whole other series of questions about where thinking happens in the inner person. So I think part of this, too, is the thinking is happening, the mind
(48:27):
is not strictly located in the brain, in my opinion. I think that the mind is actually spread throughout the whole body, as a field. And I think the brain is the connection between the mind and the consciousness we were talking about earlier. That's a very idealist philosophy. But I think intuition and the
(48:49):
unconscious that Carl Jung might have talked about are coming from places in that greater mind, or that greater sort of awareness, that are coming from somewhere that is not strictly, like, neurons firing in the brain, is how I feel. So that's a whole other conversation about, sort of, like, where the mind is located
(49:09):
and how it arises. Because again, in conventional AI theory, as in conventional neuroscience too, the theory is that the mind is arising as an epiphenomenon of the brain. But this is actually the hard problem of consciousness; this is kind of what they call it in AI. The hard problem of consciousness is, like, how is it created from raw
(49:30):
matter? Like, how does consciousness arise? Or how could it arise from matter? And it's called the hard problem because we don't even begin to know the answer to that question. Like, we can create a computer that simulates sort of access consciousness, is what they call it, which is like, I see a symbol, I see a letter, and then I process that, and then there's an output. That's what we can simulate. But the experience of, like, tasting a strawberry
(49:55):
or watching a sunset or listening to music, a song that really moves you, we have no idea what's going on there. I mean, the way they say it in philosophy is, like, there is the experience of being Brian, but, like, how can we possibly recreate that in a machine?
Unknown (50:12):
Right. And that makes me think of, like, Dr. Keltner's work on awe, and how we get these feelings of, like, awe, where it's almost like we have a sixth sense that you can't really bring back to logic or reason. It's just something that happens. And how can AI have that experience if
(50:34):
it's only being programmed through a state of practical logic, or, or things that are explainable?
Chad Woodford (50:41):
Yes, I, yeah, exactly. Exactly. I think this goes back to how I think that AI is forcing us to face all these questions that we've been kind of putting off for a long time. Like, going back to Descartes again, he just decided, basically, like, oh, there's something going on with the mind, or the soul, or the consciousness, who knows what it is, let's put it aside and just focus on matter. And so we've never
(51:01):
really grappled with that division. And I think we are being forced to now. I think it might get to the point where we realize, oh, the reason we can't create AI super intelligence or conscious AI is because consciousness is what we are, and the brain is just helping us to tap into the thing that we are. And if that's the case, then we may never be able to create
(51:25):
an AI that's like us. Or if we do, it has to be with a different technique. I actually am very interested in questions of quantum computing and AI, because Roger Penrose, who's famous for being one of Stephen Hawking's colleagues, he had a theory with this other guy, he was a neuroscientist, that the mind and consciousness is actually created somewhere in
(51:47):
quantum mechanics; like, if you look deeply enough inside these proteins in the brain, maybe something's happening at the quantum level inside those proteins that's creating it. So maybe quantum computing could get us there. I don't know.
Unknown (52:02):
Yeah, that's
fascinating. And, you know, it
sounds to me like it will be very challenging to get to the point where we can confidently say that AI is conscious, or that the super intelligence has a level of consciousness. Do you see a world where, you know, we're half humans, half robots, and, you know, the AI is using our consciousness as humans, we
(52:26):
have these chips in our arms or something? And, you know, you've seen this in movies. But as we're talking about this now, in my head, it's like, it doesn't seem that far fetched, because the AI kind of needs our consciousness.
Chad Woodford (52:36):
Yeah, that's a
great question. You're getting
into, like, another fascinating topic, which is transhumanism. And there's a whole movement. For those who aren't aware, there's a whole movement, primarily based out of the Bay Area, in San Francisco, of transhumanists, who believe that the future is us merging with machines and augmenting ourselves in different ways with machines, or becoming machines in some way. And that's the
(52:57):
solution to saving the human race. That's the solution to, like, longevity and immortality and all this stuff. And that is so fascinating, because I think, again, it's coming from the materialist worldview, in a sense, but also, yeah, I think transhumanism is, like, sneaking spirituality back in through the backdoor. Like, they don't want to talk about
(53:21):
spirituality, they don't want to acknowledge these things. But then they're sort of saying things like, well, the mind is just a pattern, a large pattern that we can somehow upload into a machine, and that it can live on beyond that. And it's like, if you really, like, if you just change some of the words, they're kind of talking about the soul. Like, they're kind of bringing the soul back in through, like, technoscience. So
(53:42):
that is fascinating to me. But people who aren't aware should know that, to a large extent, Silicon Valley is ruled and run by transhumanists. So Elon Musk, for example, is a transhumanist. And so this, I think, is potentially a problem, because a lot of the decisions that are being made in terms of
(54:02):
company policies, and what to focus on, are being made based on this idea that we're going to start to merge with machines. And I just want to say, I personally feel like one of the main themes in this conversation about AI and super intelligence and all this, in this idea that we can, like, just kind of relinquish our responsibility for solving our big problems to AI,
(54:24):
is that it diminishes the role of the human. And I think we need to, like, somehow rediscover a belief in the potential of people. I think that people can be brilliant and can solve all these problems. I don't think we need to sort of, I don't know, hand over our agency and our responsibility and our potential
(54:47):
to machines.
Unknown (54:50):
It's interesting, I had
a conversation about that
yesterday, where, it was on a very simple level, but for example, like, at Hit of Happiness, we have weekly blog posts, and it's like, sure, I could have artificial intelligence write my weekly blog posts and get to know my tone by reading past blog posts and save me a few hours a week.
(55:10):
But, A, that doesn't mean I can't do it, and B, I actually get a lot out of that process. I enjoy the journey, I enjoy having to synthesize my thoughts, and it forces me to observe my reality in a certain way. Absolutely. It's almost like, sure, AI can do all these things. But by letting AI take over all these processes from us, is it
(55:30):
really enhancing the human experience? And to that note, is it really enhancing our happiness?
Chad Woodford (55:37):
Yeah, yeah, it's
a good question. I think
there might be a middle way there somehow. Because, for example, when I create podcast episodes or YouTube content, I will use AI to suggest, say, ten titles that might work for the episode, and then I'll look at those. Maybe I'll use one, or maybe one of those will be inspiration for
(55:58):
the ultimate title. I don't have any problem doing that. And yes, in a sense, it makes you feel a little bit less creative, maybe. But also, I think it just allows you to take your creativity in a different direction. You can then focus your creativity on some other aspect of what you're making and not
(56:19):
spend too much time on the title. There are certain places where AI can have a role, I think.
Unknown (56:26):
Right, it allows you to
prioritize where you want to be
creative. At the end of the day, we are all limited on time, so choose how to use your time wisely. So, to kind of land this plane: we've talked about a lot of the pros of AI, we've talked about a few of the cons and a few things that are terrifying, right? I don't want our listeners to leave thinking the world is about to end. So what would you say
(56:47):
is, to sum it up, the main reason for optimism, and the main action that people can be taking to future-proof themselves in this AI world?
Chad Woodford (56:59):
Yeah, I think a lot of
it is still unknown. But I think the reason for optimism, and there's a lot to say there, but at a basic level: if you look back over history, there seems to be an arc to human evolution. And that arc, you know, I'm kind of inspired a little bit by Martin Luther King, who was actually quoting somebody else.
(57:22):
But you know, the arc of the universe bends towards ultimate progress for humanity. This, of course, brings in a little bit of idealism and yogic philosophy, in the sense that there's so much more going on than just the things that people are doing. I think that nature has an intelligence. And I think
(57:42):
that nature has a project it's working on. And it seems like nature is suffering, because we're in this climate change era. But I think there is a sort of death and rebirth process happening with humanity right now. Sorry, this is a long answer. But basically, there does seem to be something deeper going on that we maybe don't
(58:05):
understand. So in terms of people surviving, humanity surviving, I think we will. It might require a lot; it's hard right now. I think this time we're going through is challenging for most people in different ways. So that's the reason to be optimistic, and it's a very deep kind of philosophical answer. But also, because I
(58:25):
think that AI, like I was saying earlier, is going to help us solve a lot of problems in collaboration with ourselves, and it's going to free us up from certain things. And then the only question is, what do we do about people who are displaced by AI who haven't properly prepared for that displacement? So that gets into, how can you prepare
(58:47):
more specifically for AI? I don't know exactly. But I do think the things you can do include familiarizing yourself with AI and how to use it well. I mean, not everyone needs to do that. But if you have the inclination, if you have the interest, learn how to use these tools, try to stay abreast of what's happening. And to the extent you can, if you know
(59:10):
somebody who's working in AI, or you work in a company that's creating AI, I would like to see more people try to bring right-brain thinking into the development of AI. And that is to say, it's hard because people need to make a living, but I would like to see more people going into the humanities and
(59:32):
not so many going into STEM, you know, science, technology, engineering. I think we need both, but I think we're in a time where we need to have people who are coming from the humanities side. So yeah, if you're worried about it, learn how to use it, and also work on yourself. Learn how to be happy from the inside, which is a lot of what you talk about: cultivate happiness from
(59:54):
the inside and stop trying to find it through some external thing.
Unknown (59:58):
That's beautiful, dude.
And I really do like the idea
that you talked about earlier in the podcast, that maybe we are on the precipice of a renaissance, where, because AI is going to take over some of the STEM type of work in this world, it really does give us the space to write and read and draw, and really tap into
(01:00:22):
the creators within all of us, which is an exciting future. Because I think that's, in many ways, a key to feeling alive and feeling happy.
Chad Woodford (01:00:30):
Yeah, totally.
I mean, I think, because AI seems creative, but it can only create things based on what people have already created. So it gives us an opportunity to create new things.
Unknown (01:00:41):
Yeah, I love that. I
love that. So for anyone
listening, when you put down this podcast, why don't you go draw? Why don't you go use the other side of your brain for a little while? Yeah, 100%. If anything, get that out of this. But alright, so this was awesome. Thank you so much, Chad. I think I learned a ton about AI, and I think just learning about it helps to get rid of the fear. If
(01:01:03):
you don't know anything, it's a scary concept, because you know that it's changing our lives dramatically but you don't really know how, and I think this debunked a lot of that for me. So I feel educated, and I hope our listeners do as well. I guess one last question before we wrap it up. One thing that struck me just speaking with you, both in the
(01:01:24):
past and today, is just how much knowledge you have amassed and how you've taken all these journeys. You're a lawyer, you're an engineer, you're a yogi. You just have this love for learning and this immense curiosity. Like, A, where did this come from? And B, who are your role models, or the most inspirational people you've met along that
(01:01:46):
journey?
Chad Woodford (01:01:48):
Yeah. So you're
asking where my curiosity came
from? Is that it?
Unknown (01:01:53):
Yeah, I'd love to know. And I guess, to give
some context on where that question comes from: a lot of us are looking for our thing, the thing that sparks us, the thing that makes us feel alive. And I know you're not a millennial, but the modern millennial jumps from job to job to job, and you've gone down all these routes, trying to find
(01:02:14):
your way and find that thing that makes you happy, makes you feel alive. But it seems to me like you've been following your curiosity from before it was cool to follow your curiosity, because I think the older generations are used to staying at a job for 40 years and then retiring, right, kind of just that cookie-cutter American dream. I think it's cool how you broke off that path at a
(01:02:36):
much earlier point in your life. And like, are there people that helped you with that? That's kind of where my head is.
Chad Woodford (01:02:42):
Yeah, I guess I
was kind of like an early
prototype of a millennial. So I've always been curious. But I think, before I even knew about Joseph Campbell, I was kind of following his advice. I'm sure people have heard this: Joseph Campbell, this great mythologist, really highlighted and kind of created the idea of the hero's journey, or just identified it as a
(01:03:04):
recurring theme in world mythology. He said, just follow your bliss. And I know a lot of people say that like it's a catchphrase, and it's been kind of misinterpreted too, I think. But what he meant was, he was actually a yogi. He studied yogic philosophy early in his life. And there's this idea in yoga that the truth of yourself is your consciousness, and the truth,
(01:03:27):
like the nature of consciousness, is bliss. So what he meant by that was that if you follow your bliss, if you just do whatever is interesting to you, whatever makes you feel blissful, whatever makes you feel happy, then you'll expand your consciousness. And once you're more tapped
(01:03:50):
into unity consciousness, from a yogic standpoint, the more things will flow in your life, the more support you'll get, the more fulfillment you'll get. So following your bliss is not this thing where you're sort of checked out and unattached from the world; you're deeply immersed in the world in a way that's passionate and wanting to be of service and connected to people. So my role models, I was thinking about
(01:04:14):
this. To some extent, they're all dead white guys. But yeah, Joseph Campbell, and then Carl Jung, just because he was way ahead of his time. He was thinking about and doing things that were so outside the mainstream that it required an immense act of courage. And I don't know how much people know about Carl Jung, but he was into
(01:04:35):
astrology and synchronicity and all this stuff in the early part of the 20th century, so he was really pushing the envelope, just doing it. I mean, he did worry a little bit about whether it would be accepted, and he slowly kind of rolled this stuff out, but at the same time he was a real trailblazer. And in the same sense, Ram Dass, I'm sure your listeners know of Ram
(01:04:57):
Dass, another example of a guy who was totally caught in the square, conditioned Western world. He was a Harvard professor, and then he just went to India and became Ram Dass. And then Rick Tarnas, one of my professors, another role model: he pioneered, with Stan Grof, this archetypal astrology and archetypal cosmology, and he's done all
(01:05:21):
these things in a way that's very accessible to people, I think. And then lastly, I'll say Steve Jobs, because as much of a difficult person as he was, and as much anger as he had, he also believed in the potential of humanity. And he was the one who I think identified early on the potential for using technology in a way that
(01:05:43):
elevates the human. I think that was his whole life purpose. And that goes back to our whole conversation, too. So yeah, Steve Jobs, I think, was a real visionary. And he was also a person who was in love with India. And he said that doing psychedelics really was what created his entire vision and mission. So he was very early in a lot of these things.
Unknown (01:06:04):
I love that. I love
that, and I'm waiting for the day,
Chad, where your name is in that category. There's Carl Jung, there's Steve Jobs, there's Ram Dass, and there's Chad Woodford. I don't know. We'll see. We'll see how your hero's journey unfolds. But this was a lot of fun, Chad. Thank you for all these insights. Thank you. I
(01:06:25):
know that I'm gonna keep on reaching out to you as the world unfolds, and whether it's my fears or my excitement, I'm gonna be like, can you confirm this, Chad?
Chad Woodford (01:06:33):
Yeah, happy to do
it.
Unknown (01:06:35):
I appreciate that. So
I'm sure our listeners are just
as intrigued. If they want to follow your journey or follow your YouTube channel or whatever else, where can they find you? Yeah, so on Instagram and Threads I'm cosmic wit, cosmic wit. And then my website is cosmic dot diamonds. And the podcast is Cosmic Intelligence. And I guess you can still find me on Twitter, or X or whatever
(01:06:57):
it's called now, at CHD. But yeah, that's my address. Beautiful. So we will get those all linked into the show notes. Chad, this was a blast. Thank you so much. It was so much fun. We'll have to sync up again soon. Yeah, definitely. Yeah. And thanks for spreading some knowledge and happiness with our audience. Talk to you later.
Chad Woodford (01:07:17):
It's my passion.
Thanks, Brian. Take care.