
August 23, 2023 • 52 mins

In this episode my good friend Aleks Svetski joins me again, this time to discuss how most AI fears are a red herring. There are real concerns and risks with artificial intelligence, but most people are looking the wrong way and will never see it coming. Fortunately, there is also a solution. Visit spiritofsatoshi.ai to be part of the solution.



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
What's up, everybody. Welcome to Financial Heresy, where we talk
about how money works so that you can make more,
keep more, and give more. Today, I've got a guest
on the channel who has been here a couple of times,
repeat guest Aleks Svetski, a good friend of mine, and we're
having a great conversation here today. I'm really excited for
you, because we're talking about artificial intelligence. Specifically, number one,

(00:23):
the main fears that are being pushed about
AI and how they're completely unfounded. And then, number two,
what the real risk is that is kind of being
distracted from, like, you know, don't look at the man
behind the curtain. What is the real risk with
AI that people are being distracted from? And then finally,
which I think is most important, Aleks is building a solution,

(00:48):
an alternative so that people don't have to get sucked
in with the crowd. So really excited here, got a
good episode for you today. Thanks, Aleks, for joining. All right, Aleks, well,
thank you so much for joining me today. Really excited
to have this conversation with you.

Speaker 2 (01:02):
Joe. Good to see you again, man, it's been a while.

Speaker 1 (01:05):
It has been a while. We've spoken in the past
about some things; you've always had a little bit
of a controversial, a little bit of an anti-consensus
opinion on things. I remember our first conversation was
when everybody was going all in on NFTs and decentralized
social media networks, and you said, no, no, no, we need

(01:27):
to fix them, like, that's not
the solution, that's a non-solution for, you know, the
wrong problem. It turns out you were right; everything
you were talking about has been crashing,
and now your latest endeavor is attacking a much
bigger dragon with artificial intelligence. So what is it that
you've been building and looking into, and how

(01:50):
are we getting this AI thing all wrong?

Speaker 2 (01:53):
Dude, thank you for that intro. And yeah, let's
try and get some context around the AI
thing so that people can be on the same page. I
did actually do a podcast on my own
show roughly two years ago now with a guy
out of San Francisco called Rob Malco, and we were

(02:15):
supposed to talk about AI. We spoke a little bit
about GPT-2 back then, which was, you know, long
before ChatGPT, and ChatGPT is orders of magnitude more
effective and useful and powerful than that was, and we
kind of ended up glossing over it. Nobody really
gave a shit, and we ended up talking about his
life story growing up with deaf parents, and went down

(02:36):
the Nietzsche rabbit hole and all sorts of other stuff.
So it was kind of one of those things.
You know, when you first heard about bitcoin in twenty twelve,
you're like, oh yeah, cool, whatever, and, you know, you
move on to other interesting stuff. So obviously, you know,
November last year came around, ChatGPT landed, and kind
of, you know, January, February, March particularly, it blew up,

(02:56):
and, you know, everyone's running around with a new toy,
new hysteria, and, you know, alongside that hysteria and exuberance, you
get all sorts of stuff that comes out of it.
You know, everything from the Yuval Harari wet dream of,
you know, becoming a brain in a vat and computers

(03:18):
that are going to run everything, and everything is, you
know, going to be hacked, and all this sort of
stuff, through to, you know, the preppers, we're all going
to die, AI is going to kill us all, Terminator,
blah blah blah, to everything in between, right? And for me,
it was interesting. When I saw ChatGPT come out,
it reminded me of that conversation that Rob and I had.

(03:38):
I was like, oh fuck, you know. And it was
so much like bitcoin. I was like, damn, I remember
Max Keiser jumping up and down on the couch
in twenty twelve, ignored it; twenty fifteen, twenty sixteen, saw it again,
I was like, oh shit, there's something here. So I
did the same thing with this, and I just went
down the rabbit hole, you know. Between November and February, March,
I did, like, my thousand hours digging into: what is

(03:59):
intelligence? What does this mean? You know?
And the more and more I dug... I even
started writing a little blog on it called Authentic Intelligence
on Substack, and I was just sort of digging
through this stuff. And one of the big things that
I found, like, I did some polls on Twitter, I
was talking to people, was that most people weren't sort

(04:20):
of afraid; they sort of know that AI is
all over the place. Like, when you order something on Uber,
AI, essentially, you know, algorithms decide which car
is going to come pick you up. Algorithms
decide what stuff you're fed on search engines, on Twitter,
on Instagram and all this sort of stuff. So

(04:40):
we're kind of living in the age of quote unquote
AI anyway, but this language model thing was a big
shift for people because I think it's the first time
in history, other than, like, a parrot, that we've been
able to talk to something and have it talk back, right? And

(05:02):
it kind of made sense. Like, tell me a previous
time in history that we've ever done that. We haven't, really, right?
And I use a parrot kind of as a joke,
but it's honestly the case: a parrot is the
only thing that's ever talked back to us and kind
of understood us. Or, you know, when you see
those dogs that they've trained up to kind of say
a few words, we're like, oh my god, it's so intelligent, right?
So we've got these programs now which can string together

(05:25):
language, which are essentially, you know, we said
this offline, but they're like a sophisticated autocomplete. And,
you know, people's imagination caught fire and they're like, holy shit,
artificial general intelligence, which is like the big holy grail
of AI, right? And you see OpenAI and all
these guys talking about it. That's the number

(05:45):
one goal. But people started to think, and they
are still thinking, that this is the dawn of real
sentient, intelligent machines that are somehow going to either take
over the world or obsolete humanity. You know, name the threat,
name the issue, name the problem. So I started

(06:10):
going down the rabbit hole and asked, basically, the big
question: okay, what is everyone afraid of?
It's not really AI,
it's actually AGI. It's sentient intelligence, like another alien type
of intelligence that is more powerful than us. Okay, so
if that's the fear, all right, what does that actually mean?
So I tried to define artificial general intelligence, and what

(06:31):
I found difficult was, wait a minute, we don't even
have a consensus on what the word intelligence means. You know,
that's first and foremost still nebulous. And more importantly
than that, you know, we might be able
to get some sort of consensus that intelligence is
some sort of pattern recognition and probabilities and this and that.

(06:54):
But then, okay, if that's some rough estimation or conceptualization
of what intelligence is, then how many intelligences are there? Well,
you've obviously got cognitive intelligence, but if you look at
the human being, you've got emotional intelligence,
you've got hormonal, endocrine intelligence, you've got intelligence in your

(07:15):
muscles, intelligence in your gut, you've got neurons all throughout
the body. You've got intelligence in the way your bones form.
You've got intuitive intelligence, you know,
the metaphysical kind of intelligence that's a bit more spiritual,
instinctual in nature. Like, how the fuck do you even
count all of those and define all of those
and work through all of those? And what I

(07:36):
kind of came to realize, and I wrote about this
in those essays, was, man, cognitive intelligence itself is
a, you know, super broad concept, and language models,
they're just a sliver. All we've really discovered
with these new AI, these new language models, is that

(07:58):
language is actually more pattern recognition than it is intelligence,
and that's something we didn't realize before. So
I went back and read some old AI books, like Nick
Bostrom's Superintelligence, and in there he was adamant that by
the time computers work out how to speak, they will
already be artificially generally intelligent, and they will have hit

(08:20):
that escape velocity, and by the time we realize
that we can talk to machines, they will run the world.
Well, he was one hundred percent wrong on that, right? So,
you know, the only thing we've realized out of
this whole sort of modern AI experiment with language
models is that language is not, you know,

(08:42):
it's not as deep as we thought it was. It's
relatively probabilistic, same as imagery and art, like, you know,
these kinds of diffusion models, DALL-E, DALL-E 2, and Midjourney
and everything like that. They're just, you know,
putting pixels together in a probabilistic fashion that we
human beings recognize. So anyway, I went on
this whole rant about, like, hey, AGI is actually a scam.

(09:08):
It's probably not going to happen in our lifetimes, if ever,
because we are so far from just cognitive, cerebral intelligence,
let alone general intelligence, that I think there's a
deeper problem here, that people who know better
are using AGI as a red herring for. So I'll
stop there for a moment. You know, maybe we can
pull on a couple of threads there, but that's sort of

(09:30):
like trying to debunk the fear, which is AGI. But
that is a nebulous fear.
It's undefined, very similar to climate change, right? The climate's
going to kill us all; we don't know how, why, what.
But it's freaky enough, and people
can't understand it enough, that it's worthwhile creating another

(09:53):
regulatory body to manage it and protect us from that
and get more taxes out of us for the purpose
of saving the world again.

Speaker 1 (10:01):
So basically, to summarize, if I understand this first fear correctly,
it's that people are afraid that we're going
to have a Skynet Terminator type of event, and
at some point the computer is going to become smarter
than us, and whatever objective we have given to it,

(10:22):
it could potentially figure out a way to use that
objective against humanity. But your point is that intelligence is
so little understood, and from what we do understand about it,
we already know that it's way, way, way
bigger than any artificial general intelligence, or anything that we've

(10:45):
been able to even come close to accomplishing so far.
You talked about language being more pattern recognition. You
also mentioned, and I think this is the reason why
full self-driving with Tesla has been delayed, you know,
like a decade now, that object recognition is not actually
something that seems possible with computers. You
really need tool recognition, which is something even deeper, fundamentally
embedded into the human psyche, that we don't understand how
to translate to computers. It also assumes that
all the intelligence humans have is, uh, what's the

(11:28):
word, biological. It assumes that there's nothing, like, transcendent, and
we don't need to get into, you know, any
sort of spiritual argument here.
But it does assume that there's no.

Speaker 2 (11:42):
Mind, there's no meta that's in charge.

Speaker 1 (11:44):
Of the brain, you know what I mean. Because if
there is, then that completely throws out the whole, like,
how would you be able to build that with, you know, electrodes?
So basically this whole fear-mongering that AI
is about to take over is not the
thing that people should really be concerned about.

Speaker 2 (12:06):
Exactly, exactly. I mean, before we get into what
people should actually be concerned about, because I think there
is actually a genuine danger here, which is of a
very different kind. I was listening to a
podcast by, I think it was, John Vervaeke or something,
and he made a claim which I really agree with.
He just articulated it so well, something along the lines

(12:26):
of: we humans might be the universe's peak form of
general intelligence, right? Because what people are finding with these
language models and stuff like that is, you know,
the typical notion: they're like, oh, if
we just string together all of these different models and
get each model doing a different thing, and then
you place a governing model that selects the different models,

(12:48):
and then you have them clustered, and then
another governing model, so basically building up an
intelligence hierarchy. What they find is that compute resources and
everything go through the fucking roof. Complexity goes through the roof.
The whole system slows down, and you get these
kinds of diseconomies of scale. So, you know,
it seems like humans, in some way, like we have

(13:10):
all of these embedded layers upon layers upon layers upon
layers of intelligence, and we're just sort of the
right amount of everything. Yeah, we can't beat a
computer specifically on math, right, but hey, we're fucking good
enough at math, and we're dexterous enough, like we
can do something. So, you know, computers can
beat us in narrow domains or narrow dimensions, but as

(13:34):
soon as you try and make the ultimate computer to
do everything better than all of us, you'll fail. Like,
you know, maybe the end point is actually we end
up creating another human being, and, like, okay, well, you know.

Speaker 1 (13:46):
So, you know, we should have just gone and had babies, right?
Right, that makes sense. Okay, and that's a
good point, because this brings up the
other fear about this you've talked about:
you know, we build one specific program that can do
one thing way better than any human can.

(14:08):
It can beat us at chess, it
can do, you know, math, it can do whatever
it is. We could build a program that can be
better than people. First, I'd like to,
you know, make the note that that's all technology is,
and all it's ever been. I mean, fire can
get nutrients better than humans can by foraging; like,

(14:29):
you know, a spear or a tractor can
do work better than humans. I mean, that's
what human progress is. It's finding tools that do things better
than we do. But that's one
of the main fears. Even without AGI, the
holy grail, people are afraid: oh, they're gonna take my jobs.
I'm not gonna have my cashier job. I'm not gonna
have my truck driver job. I'm not gonna have my
lawyer job. I'm not gonna have my doctor job, whatever

(14:50):
the job is. We fear that, hey, this tool is
going to replace me. Is that a
fear that is unfounded? Is that, uh, you know, gonna happen,
but it's good? Is it not gonna happen? What are
your thoughts on jobs being replaced?

Speaker 2 (15:04):
Yeah, good one. So, I haven't finished this
article yet, but I was writing an article called Midwit
Obsolescence Technology, and I was like, hey, AI should be
renamed to this, right? Because, you know, a lot of
these modern language models, particularly the mainstream ones like
your ChatGPTs, they're very good at basically just regurgitating mainstream language.

Speaker 1 (15:28):
Right.

Speaker 2 (15:28):
So, like, if you're a Vice
reporter or, you know, working for CNBC, and
you're writing this basic run-of-the-mill
journo stuff, you're fucked. ChatGPT can do that for you.
But if you're someone with an opinion outside of
the accepted Overton window, you know, if you're not a

(15:49):
midwit, for example, you're fine. Like, I tried to use
ChatGPT to help me write some stuff, and I'm like, fuck,
this thing sucks. Now, you can prompt and cajole
it and try and, you know, do this
and that, like, you know, clean this up and act like
Nietzsche or speak like Jordan Peterson. But, you know, the
hours you spend on trying to prompt your way
into something useful, you could have just written it yourself,

(16:11):
and you may as well practice that. You may as
well use that muscle in your head, because if you
don't use it, you lose it, right? So,
beyond that, beyond these sorts of language tasks:
you know, they were always saying, oh,
AI will replace rote tasks before it replaces creative tasks. Now,
you know, writing is somewhat creative, and you know, maybe

(16:32):
like this, as I said, this midwit type of creative stuff,
you know, the general mainstream shit, is probably going
to get replaced by these, you know, probability machines. So
those people should be afraid. But, you know, things
like cashiers, truck drivers and stuff like that,
sure, we'll have automation, and, you know,
I would actually place AI

(16:55):
underneath automation, as opposed to automation under AI. But, you know,
technology and automation have always been about trying
to do more with less, and that will continue.
thing is a lot of these shifts are largely cultural.
And as you know, and I think as anyone listening
to this knows, human beings are incredibly good at

(17:19):
finding something else to do with their time when they
have some free time. And this really just comes back
to the kind of human being you are and how
intentional you are about that spare time. So there are two
threads I want to pull on here. Number one:
let's say there are some genuinely useful bits of AI,

(17:41):
you know, or tools. Let's
say there are some, you know, more based language models,
kind of like what we're trying to build now, et cetera.
And you might, as an intentional person, use that to
cut down maybe your writing time on a daily basis
from three hours down to two hours or one hour,
or something like that. You use something to help, you know,

(18:02):
automate your emails, this and that. Let's say you cut
your working day down; you can be as effective in
four hours instead of eight. Well, what do
you then do? Do you spend the rest of the
time on social media? Do you go on Netflix?

Speaker 1 (18:16):
You know?

Speaker 2 (18:17):
Do you watch porn? You know? What are you doing?
Or are you maybe feeding your mind another way? Or
are you actually going out there and using your hands?
Are you using your body? Do you like go and
do some jiu jitsu? Like do you go for a run?
Do you go to the gym?

Speaker 1 (18:29):
Like?

Speaker 2 (18:30):
It will open up time for other things, and people who
are intentional about it will use that. Most people, I
actually think, are just going to, basically, you know, if
there are these tools which automate a bunch of stuff
and people end up with more time, my guess is,
you know, that's where things like the Metaverse,
virtual reality, Netflix, Uber Eats, blah blah blah come in. It's

(18:50):
that kind of longhouse, right? You know,
they'll just distract themselves into oblivion, you know,
the WALL-E movie, basically. So I don't think it's,
you know... automation has always done this, and it's
always created more free time, except we never end up
with free time. We just fill it with something, and

(19:11):
it's the same old story: are
you gonna fill it with something useful, productive, growth, blah
blah blah, you know, or is it going
to be in the bucket of, you know, wasting yourself away,
wasting your time, wasting your energy, wasting your vitality, wasting
your seed, et cetera. People
are just gonna be confronted with the exact same fucking question.
I mean, just think about it. Like, ChatGPT came out

(19:33):
six, seven months ago, and as much as I
bag it out sometimes, it is still a fundamentally
profound thing. Like, I'm typing to a fucking computer and
the computer is talking back to me; as much as it
sounds like a little midwit and it's apologetic and all
this sort of shit, that's fucking cool. People are already
bored with it, man. So, you know,
people have already found other things to fill their
time with. That's what the human condition is. It

(19:55):
gets used to a situation and it fills its time again.
It's never gonna change.

Speaker 1 (20:01):
Yeah, yeah. And the fear is, well, then
there's going to be a small minority of people who
are going to be doing all of the production; therefore
they're going to have all the wealth. And that's just
a fundamental misunderstanding of the difference between wealth
and money. It's like, okay, the same thing happened with
like the discovery of petroleum and electricity. It's like that

(20:25):
drastically increased the wealth of everybody around the world. And today,
if you wanted to live the lifestyle of a king
in the seventeen hundreds, it takes like two hours of
work a day at a minimum wage job, and you
can live the lifestyle that a king lived in the
seventeen hundreds. It's like, okay: no petroleum byproducts,

(20:46):
no electricity, no running water. It's
very, very easy to achieve, today, the level of wealth
that was available then, from a monetary perspective. The difference
is that, to your point, we escalate,
we increase our expectations, and so therefore we drive ourselves

(21:07):
to produce more so that we can have more. And
so the amount of wealth that will be shared will
be a result of things that people want and
need becoming cheaper, and so more easily affordable. And
then the other point you made about who's going

(21:28):
to lose their jobs and
who's in trouble: video editing is a
big thing for me, you know, it's a big
expense for me. And I could sit literally anyone
down and teach them how to use a video editor,
and it just takes training. I say, click this button,
click this button, click this button, and after a couple
of weeks, literally anybody can do that job. But now

(21:51):
there's a program that can do it for me, And
instead of having to spend five grand a month to
produce ten TikToks, today I can spend seventy dollars a
month and have the computer do the exact same thing
for me. And it's like, basically, in my opinion, it
seems like if I can train a person to do it,
eventually a program will be able to do it because
the program can get trained just like a person. But
anything that has to be done that can't be trained,

(22:14):
you're not going to be able to train a computer
to do it because you can't train a person to
do it. Like, to your point, the thinking, the creativity,
the writing, the reasoning. Yes, something that takes an
actual person venturing beyond what is, you know,
trainable by a midwit manager.

Speaker 2 (22:33):
Yeah, well, and this is exactly it, like, you know,
what a beautiful world it is if what
you're doing can be done from
a place of creativity, interest, intrigue, and curiosity, right? Like,
who the fuck wants to sit down and just fucking

(22:54):
push buttons all day? Well, maybe, like, you know,
I want to do something interesting, and, you know, the
more we can outsource these menial tasks the better. So,
you know, I think in that sense, that's
been and always will be, in my mind,
the proper AI pitch, so to speak. And I just think,

(23:20):
you know, I don't see how that's different from what
the AI pitch was twelve months ago, eighteen months ago,
or three years ago or five years ago, when they
were talking about AI, you know, automating things away. I
think it's just the flavor of it this time. I
think people got caught up with ChatGPT and Midjourney's
seeming creativity. And this is the thing: none
of that stuff is creativity. It's just the law of

(23:40):
large numbers. It gives you the perception of creativity, but
it really isn't. There's nothing new that comes out of
any of these language models or Midjourney or anything
like that. It's just a new combination of stuff. And,
you know, someone might argue, well, all human
creativity is a new combination of stuff, and that is
true to a degree. But there's something. I mean,

(24:03):
you even see it, like, you know, when you
get Midjourney to create a really cool image or
something like that. I was doing it the other day
for some Alexander the Great imagery, you know, trying
to get, like, Alexander charging into battle with
his sarissa up, and I just couldn't get the
fucking sarissa right. It was just, you know, this random
sword, like, poking out of his head or, you know,
pointing the wrong way or something like that. It was

(24:23):
because the machine doesn't know; it's just
slapping pixels together in such an order that it tries
to approximate, you know, the words that I have in
a particular order, and you've just got to keep playing
and fucking with the words until you kind of get
them in the right order to get the right thing. But
you see that the image itself almost, like, lacks,

(24:45):
like, lacks essence or lacks intent. When an artist
actually paints something or creates something, there's an actual intent
around where things are placed. It's not
random, like, you know, monkeys typing on
a typewriter producing the Bible, right? There's
a difference between sentience and probability, and I'm just

(25:09):
not convinced that, you know, sentience is just going
to magically emerge from the circuits, at least
not now. If that is how sentience emerges,
I just don't see it being anywhere close to where
we are at the moment.

Speaker 1 (25:28):
Yeah, yeah, that makes sense. So if those two things
are not the real fear, what is the
real distraction?

Speaker 2 (25:38):
Is?

Speaker 1 (25:39):
What is the real thing that we should be
afraid of here?

Speaker 2 (25:42):
So I think, you know, maybe instead of being afraid
of it, we can be vigilant of it and
do something about it. I mean, at least the
people listening to this, for example, or at least the
people who want to be a little bit more awake
and be agents of their own life instead of being,

(26:02):
you know, the classic lemming who gets pushed around
whichever way the wind blows. But think about
the Internet, right? When it first came out, when
the Internet emerged, what was it? It was
this kind of universe of information where anybody could,
you know, create something, and we could go find it,
we could search for it. But as the

(26:23):
Internet grew, it became harder and harder to find stuff.
So what did you have? You had the
rise of the search engines,

Speaker 1 (26:29):
Right?

Speaker 2 (26:30):
And obviously we know a little company called Google that
figured out a way to index that stuff really well,
you know, they crawled the whole Internet and made the
stuff that you were looking for more discoverable and more relevant, right?
And what happened over time? Like, I still remember the
early days of Google, like I used to go to
page two, three, four, five. I don't know if you

(26:52):
remember those days, right, Like you used to go back,
used to like search for stuff. I mean, tell me,
who the fuck goes past the first page of Google?

Speaker 1 (26:58):
Now?

Speaker 2 (26:59):
Nobody past the first couple of results, exactly, nobody. So
what's essentially happened is that the way people, you know,
how we perceive the world is a function
of what we see, right, like the glasses we wear.
Essentially, it reminds me of, sort of, Plato's cave,

(27:20):
you know, the allegory of, you know, the shadows and shit.
It's like, what you see is what you
perceive the world as. So, you know, the way we
get information on the Internet now is what Google tells us.
That's truth, that's reality, that's what we know, that's knowledge.

Speaker 1 (27:37):
Right.

Speaker 2 (27:39):
The same thing with social media, right? The algorithms,
what they feed us is what we perceive as true,
as knowledge, et cetera. So what I'm seeing,
and I think this is definitely, well, maybe I
shouldn't say definitely because it's too strong a word,
what I see as very likely the step change, or
the new zero-to-one moment, for AI

(28:00):
here, is the language user interface. It's this idea that
you don't have to search for shit anymore; you
just ask your language model, hey, you know, what
was this? Or, like, how does this work? And I
do that myself. My wife does it all the time.
She'll ask me a question, I'm like, just
ask ChatGPT, and she'll ask it and be like, yeah,
you know, it said this. And sometimes
there'll be things that she asks me about, I'm like, yeah,
(28:21):
that's a fucking scam. Like, you know, things like, you know,
eat a balanced diet, like, you know, start with
your granola in the morning, and, you know, pasteurized
milk or whatever. You know, stuff we know
is mainstream crap. People
like us will know that straight away. But for the
average normie, that's the truth, right? So as we move forward,

(28:41):
my sense is that the language model
will become the new user interface. You just chat and
ask, and what it tells you will be your
definition of truth. So if that's the case, well, I
guess people can probably tell where the real problem is
going to be: it's upstream of that. If you

(29:03):
can control what the right language or the right output is...
you know, there's already huge talk
about AI safety. When they talk about AI safety, what
are they talking about? They're talking about what the AI says,
about approved language or safe speech. Like,

(29:23):
whenever you hear any sort of bureaucratic regulatory agency
say the word safe, just fucking run in the opposite direction, right?
Like, you know, it's for your safety; all right, bro,
we know exactly what that means. And
this is what triggered me:
when I was going down the rabbit hole, I
just started seeing all this stuff, like, we need regulatory
bodies for safe and responsible use of AI, we need

(29:45):
to be... you know, as soon as I hear
safe and responsible, man, my skin crawls, the spidey
senses go off straight away, right? And you dig a little bit deeper:
what are they doing? Well, now, particular AI models,
if you want to run them, you need to ensure that
they are run through toxicity filters. And these toxicity filters,
you know who's behind them? Ah, well, surprise, surprise. It's

(30:07):
like Google and this and that. And they basically filter
for particular words, particular language structures, particular sentence completions, and
all this sort of stuff. And they basically guardrail these
models into delivering you an Overton window of acceptable discourse.
Now, you can try and prompt your way in and
around it and all this sort of stuff. But every time
you do that, you actually help them harden their system

(30:28):
a little bit more because they learn from that. They
you know, they.

Speaker 1 (30:31):
Restrict those, their weaknesses. Yeah, exactly.

Speaker 2 (30:34):
So what you end up having is this
kind of age of approved knowledge. And if
we end up in a world, for example, where we
have one or two or three language models that are
the approved language models everyone's allowed to use, well,
then what do you think is going to happen?

Speaker 1 (30:50):
It is like you're.

Speaker 2 (30:50):
Literally playing a game of inception. You can
essentially just make people think whatever you want them to think,
because that's how they get their knowledge, and then that's
what they regurgitate out into the world. So
that's where I believe the real danger is. And I
mean, look at the kind of people that have been
kicking and yelling and screaming about AI safety. When

(31:12):
you dig into the details: on the surface,
they say artificial general intelligence, you know, is the biggest
threat to humanity. It could obsolete us, it could wipe
us out. They tell you nothing about what general
intelligence means, how that's even possible, any of that
sort of stuff. And then when you look at the
action steps, the action steps are: we need to put

(31:33):
regulatory bodies in place to regulate speech, to regulate discourse,
to ensure that these language models are safe, and to
ensure that we have responsible development moving forward. That's
straight-up controlled language. It's, like, Orwell
and Inception in one. So that's, to sum

(31:55):
up where I think the biggest threat is.
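
To make the filtering mechanism Aleks describes concrete, here is a minimal sketch, in Python, of how an output-side "toxicity filter" can guardrail a model. Everything in it is an illustrative assumption: the blocklist is a toy, and base_model_generate is a made-up stand-in for any text-generation call, not any vendor's actual pipeline.

```python
# Minimal sketch of an output-side "safety" filter wrapping a language model.
# Illustrative only: real systems use trained classifiers, not a blocklist,
# and `base_model_generate` is a placeholder, not a real API.

BLOCKED_PHRASES = {"example banned phrase", "another disallowed claim"}

def base_model_generate(prompt: str) -> str:
    """Placeholder for the underlying model's text generation."""
    return "model output for: " + prompt

def passes_filter(text: str) -> bool:
    # Check the draft against the list of disallowed content.
    lowered = text.lower()
    return not any(phrase in lowered for phrase in BLOCKED_PHRASES)

def guarded_generate(prompt: str) -> str:
    draft = base_model_generate(prompt)
    if passes_filter(draft):
        return draft
    # The caller never sees the unfiltered draft, only an approved refusal;
    # this is how a filter narrows the window of acceptable output.
    return "I can't help with that."
```

The design point the conversation is making: whoever controls passes_filter controls what the model is allowed to say, invisibly to the user.
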

Speaker 1 (31:57):
Well, and that perfectly fits with, you know,
the things that some of these people are doing outside
of artificial intelligence as well, like Sam Altman, the OpenAI
guy. Number one, when he's in front of Congress,
Congress is saying, you know, you're making this dangerous thing,
and he's like, that's why I'm asking you guys to
regulate it. It's like, that's odd, number one. And
(32:19):
then number two his world coin, the new cryptocurrency number one.
It's named world Coin, and it's got orbs that scan
your eyeballs. It's like, you couldn't imagine a more dystopian,
Like it sounds fake and when you dig into it,
it gathers all your personal data for your own protection,
and it promises privacy, but not from governments. It's like, literally,

(32:44):
we're gathering it so that governments can use this to
keep you safe, enable cross-border payments,
but also make sure there's no money laundering, aka shut
off payments of anybody that we don't agree with, what
they're paying for. And it's just this dystopian nightmare, and
it fits perfectly with controlling thought through their
artificial intelligence model. It's not far-

(33:08):
fetched at all. It's literally, when you read
between the lines, what these people are pushing.

Speaker 2 (33:13):
For. Totally, totally. So, you know, with that in mind,
and this sort
of ties into what I've been working on for
the last six, or, Jesus, it's almost been eight months now:
you fight this by building alternatives, right? So, like,
if we think about what bitcoin is, you know, we've
spoken about bitcoin on a number of occasions. Bitcoin

(33:34):
is an alternative. It's an open source alternative that anybody
can use, anybody can get access to, anybody can run locally,
and it happens to be a network that supports the
most important technology humans need to subsist, which is money, right?
So, mhm, that's what that is. So if we

(33:55):
look at the AI thing: well, what do we need
to combat this? And luckily enough, this is actually happening
in the AI space pretty broadly. You need
open-source models, you need options, you need other models
that can compete with these primary models. Now, you know,
AI does fundamentally benefit from economies of scale, so

(34:16):
it is very difficult to compete with the ChatGPTs
and OpenAIs of the world. I mean,
these guys are sitting on how many billions in the bank,
and, you know, they can afford to spend gazillions of
dollars on the best engineers, et cetera, et cetera, and
data gathering and all this sort of stuff. But,
you know, they are fundamentally trying to build, I mean,

(34:36):
part of me actually hopes that they believe
their own bullshit when it comes to AGI, because at
least what they'll do is they'll spend billions
of dollars trying to build a, you know, tin man
that is never going to actually work. So, you know,
that might be poetic justice in the end,
because, I mean, a lot of these guys, you look
at their model of the world: they are truly

(34:57):
the Yuval Harari brain-in-a-vat, you know, like
the Bryan Johnson I'm-going-to-live-to-five-hundred-years type,
and, like, the guy's
literally transforming into a fish before our very eyes,
you know, pale skin; it's just retarded.

Speaker 1 (35:12):
He looks almost unreal, dude. Yeah, exactly, he just,
it's so weird.

Speaker 2 (35:16):
So, like, yeah, there's probably some poetic justice in
all of this, and a lot of, you know, just
money being spent on dumb shit. I think there's
also a mix of, you know, these
narratives being pushed. Like, you look at who's really pushing
the narratives, and what is it? It's Nvidia, Microsoft, Google, blah,

(35:39):
blah blah, all these guys. And, you know, funny enough,
just follow the money, classic. All these startups
are joining in, which are innovating, and they're sucking talent from
all sides of, you know, the world, and they're raising money.
The money is coming from VCs, and the VCs' LPs
are Google Ventures, you know, Nvidia Ventures, all this

(36:02):
sort of stuff. And what are the startups spending
money on? They're spending money on graphics cards and compute
and all that. So the money is just flowing right back
up to the same dudes, and, you know, they're just
concentrating their positions based on a new narrative.
So, you know, there's all sorts of weird
shenanigans going on. And that's not to

(36:24):
say that, you know, these tools, these new technologies, are
not going to be useful; there's obviously some stuff here.
But anyway, I kind of went off on a tangent
there. To tie it back: the way
to combat this stuff, the way to combat anything, is
to build alternatives and make the alternatives appealing enough so
that other people want to use them. So, like, what

(36:46):
we're doing now with the Spirit of Satoshi project
is we're collecting and curating all of the bitcoin data
in the world, which is a huge corpus of data:
everything from Austrian economics, from classical literature, from, like, conservative
types of philosophy, you know, your Thomas Carlyles, Edmund Burkes
and all this sort of stuff. All of this
kind of stuff we're collecting, and we're training a model

(37:06):
from scratch on all of that, and we want to
give people... it's going to be a far more narrow model.
It'll be much smaller than ChatGPT, won't be as
versatile as ChatGPT, like, won't be able to write you
a poem and shit like that, but it'll be functional.
Let's say you want to get an idea of, okay,
you know, what's something that Thomas Carlyle
would have disagreed on with, you know, Nietzsche, for example.

(37:30):
You know, it'll give you something functional, something useful.
Like, what would Nietzsche have thought
about bitcoin? What would Alexander Solzhenitsyn
have thought of ethereum? I don't know, whatever;
it'll give you some interesting stuff. And I think
somewhere in there there might be some utility for people
who want to think outside of the box and not

(37:51):
conform to the mainstream. And what this might sow the
seeds for is language models around health. Like, I'd love
to do, like, a Dr. Mercola language model,
something that you can ask, like, hey, you know,
I'm twenty-five, I'm
looking to train for this, this is what I want,
this is my profile: what should I eat? When should I
eat it? How should I eat? What should I look out for, blah

(38:11):
blah blah. Just more alternatives that are more
narrow but more suited for people that don't want to
adopt the mainstream narrative. And I think that's always
been the case, and I think that's how we combat
this sort of stuff.

Speaker 1 (38:27):
Is the bitcoin transaction data, like the blockchain,
information that this is pulling from as well? Or
would that not have any utility?

Speaker 2 (38:40):
I mean, not really any utility now. I
guess what you could do, for example, is
train a language model to query the bitcoin blockchain for
particular data, or block headers, or transactions, for example. And what
you could maybe do is build a
mempool or, you know, blockchain transaction assistant, for example.

(39:03):
You might say, hey, you know, I'm looking for
this transaction on this day, can you
pull it up for me? And, you know, maybe a
bitcoin-type model could go and find the precise transaction,
give you a list: hey, is it one
of these ones? What are you looking for specifically? Blah
blah blah. So, you know, maybe there's utility there,
in kind of trying to make sense

(39:26):
of what is actually important data on the bitcoin network.
But would you do that as an individual?
Less likely; that might be something that's an
enterprise tool. For example, a company that wants
to understand money flows, movements, do chain analysis,
stuff like that; they might use something like that.
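
As a rough illustration of the transaction-assistant idea just described, here is a hedged Python sketch: the model's only job is to turn a question into a structured query that ordinary code then runs against a node. Every name below (model_plan, fetch_transaction, the placeholder txid) is a hypothetical stand-in, not a real API.

```python
# Rough sketch of a "blockchain transaction assistant". The language model
# doesn't memorize chain data; it emits a structured query, and a plain
# function executes it against a node. All names here are hypothetical.

def fetch_transaction(txid: str) -> dict:
    # Placeholder for a real node lookup (e.g., an RPC call to a bitcoin node).
    return {"txid": txid, "confirmations": 42}

def model_plan(question: str) -> dict:
    # Stand-in for the LLM step: a real system would prompt the model to
    # emit structured output naming a tool and its arguments.
    return {"tool": "fetch_transaction", "args": {"txid": "placeholder-txid"}}

def assistant(question: str) -> str:
    plan = model_plan(question)
    if plan["tool"] == "fetch_transaction":
        result = fetch_transaction(**plan["args"])
        # Echo the lookup back, "is it one of these ones?" style.
        return f"Is this the one you're looking for? {result}"
    return "I couldn't map that question to a query."
```
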

Speaker 1 (39:43):
So essentially, well, number one, anybody
who knows how to do this can have a
language model trained on any data set. And so in
this circumstance, you're saying, okay, Austrian economics, classical philosophy or

(40:06):
classical literature, and you're taking that so that
from there people can be creative and say, okay,
here's something useful that I can pull out of that.

Speaker 2 (40:22):
Sorry, I just muted myself, freaking dogs barking. Yes. Basically,
so the...

Speaker 3 (40:28):
The idea is that there's a big misconception about language
models that people think, Okay, these language models are trained
on all this data and they know the data.

Speaker 2 (40:41):
That's actually not how it really works. Like,
a language model doesn't know shit. It doesn't know that
Jordan Peterson said X, or Nietzsche said Y, or
Satoshi said Z, right? What the training
does is, it's basically a game of guess the next word,
and it's very complex, but that's kind
of an easy way to summarize it so people

(41:02):
can conceptualize it. And when you're feeding a model particular
kinds of data, there is a language style. Like, when
you read Austrian economics, you kind of know that it's
Austrian economics. You know, Mises and Rothbard might have
their own sort of linguistic style, but the points they're
making are generally similar, right? Whereas if you read the
Communist Manifesto or something from Marx or Engels or something

(41:25):
like that, the general points, the way they string
words together into sentences, everything, the essence of what
they're saying, is different, right? So when you're training a model,
you know, you're training it around the probabilities of words
and the weights and the biases, around how words and
sentences and everything are structured. So the model doesn't actually
know anything, but when you ask it something, it will

(41:46):
create a sentence where the structure of the
words and the structure of the sentence is such that it
is going to sound like an Austrian economist or a
Bitcoiner or something like that. And what you
end up getting with these probability machines is,
you might actually get a fucking random insight out of it.

(42:06):
Like, you say, hey, tell me something, you know,
as I said earlier, like, you know, what Nietzsche and
Thomas Carlyle would have disagreed on,
and it would string
this shit together in such a way, and you'll be like, fuck,
actually that's pretty good, or you'll be like, okay, no,
that's just dumb. And that other part
is called hallucination, for example. That's why, you know,

(42:28):
I'm sure you've heard the term, like, ChatGPT
and stuff hallucinates, it makes up facts. Have
you heard of that? Yeah, yep. So it's
because ChatGPT doesn't know whether it's a fact or not.
It's just strung together words, probabilistically speaking, that actually make,
you know, I hate to use the word sense, but, like,
the highest probability of these words being in a

(42:52):
row come out, and ChatGPT says it with what
sounds like certainty, but it doesn't know whether it's wrong
or right. And you, as a human, might know that
it's wrong and be like, well, that's not the fucking quote,
that's bullshit. But yeah, ChatGPT is not drawing from a
database and, you know, copy-pasting the quote. It's
just stringing the words together in such a way. So

(43:14):
anyway, you know, at risk of getting a
little bit technical there: the utility in an alternative
model is having, you know, a different
linguistic style. Now, I will mention one more thing here:
you can train a model to query a database

(43:35):
so that you can embed facts in language, right? And
this is where you could get, for example,
you know, the example I gave you earlier about these
language user interfaces, where you can speak to a model
and it tells you stuff and it's genuinely fact, but

(43:56):
it's also within, you know, the linguistic style that is approved.
And this could become the new way: we don't use
search engines anymore, we just use our knowledgeable assistant, right,
our intelligent friend. So we'll do the same thing with
Spirit of Satoshi, in that not only are we training the
model on all of the data that we're cataloging, but

(44:16):
all of the data that we're cataloging, we are putting
into a massive repository that the model can then query later,
so that it is not only linguistically of a, you know,
nuanced style that is more like our model of the world,
but it can also query that stuff, so that when
it delivers something, there is fact mixed with style together, basically.
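
Two toy Python fragments to ground what Aleks just described; neither is the Spirit of Satoshi codebase, and every table and name is an invented assumption. The first shows "guess the next word" as a literal probability table; the second shows the query-a-repository step that mixes a stored fact into a styled answer.

```python
import random

# (a) "Guess the next word": training fits probabilities over next tokens;
# generation just samples from them. A toy bigram table shows the shape.
NEXT_WORD_PROBS = {
    "sound": {"money": 0.7, "argument": 0.3},
    "money": {"matters": 0.6, "printer": 0.4},
}

def next_word(word: str) -> str:
    dist = NEXT_WORD_PROBS.get(word, {"<end>": 1.0})
    words, probs = zip(*dist.items())
    return random.choices(words, weights=probs)[0]

# (b) Fact plus style: query a repository first, then hand the retrieved
# fact to the model so the styled answer is grounded, not hallucinated.
REPOSITORY = {"halving": "The block subsidy halves roughly every 210,000 blocks."}

def grounded_answer(question: str) -> str:
    facts = [v for k, v in REPOSITORY.items() if k in question.lower()]
    context = " ".join(facts) if facts else "(no stored fact found)"
    # A real system would now prompt the model with `context` plus the question.
    return f"Context: {context} -> styled answer to: {question}"
```

The model in (a) "knows" nothing; it only has weights over word sequences, which is why step (b) is needed when facts matter.
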

Speaker 1 (44:41):
Yeah, yeah, that makes sense. It's really interesting, because
these seem to me, and you can correct me
if I'm wrong about this, these seem like foundational tools.
It doesn't seem apparent yet
what they'll be used for.

(45:05):
It's almost like inventing, like, we
have a tractor before we have farms,
and somebody's got to come along and be like, hey,
we can actually use this to make a farm.

Speaker 2 (45:25):
That's a very good analogy, actually, because, you know,
we raised a little bit of money for this,
and, you know, I told investors, I'm like, man,
this is a fucking experiment; we
don't know. Because they're like, what's the
commercial application? I said, well, we could have commercial applications.
We could have, like, a bitcoin onboarding assistant or a bitcoin
educational assistant. You know, maybe we could build, like,
an alternative to Coursera, which is about, like,

(45:47):
training people to be more sovereign, and, you know, homeschooling
and health and all this sort of stuff. But I said,
you know, the caveat was, we don't actually fucking know.
The first step is: can we build an alternative
language model that is not guardrailed, not toxicity-filtered
and all this other crap, and is trained on a
specific corpus that we've selected and curated? And then

(46:07):
let's see what problems we can solve with that.
That's kind of the next step. So right now,
we're still in the process of training. We're basically
in the process of curating, cleaning and creating the specific
data set, and training simultaneously; we're doing it in
parallel. And we've actually built, I might just mention

(46:31):
this here, a cool little app,
well, we haven't given it a name really,
but it's like, you can help train Spirit of Satoshi.
And this is something where we've integrated bitcoin quite deeply, so
you can go create an account just
by scanning with a Lightning wallet, or just
logging in with Nostr or email. And we're calling it proof of
knowledge, in the sense that if you've got, like, some

(46:53):
knowledge about bitcoin, Austrian economics or anything like that,
you can go in there and answer questions
as if you were the model, and we use that
style of language and that information to then train the
model, so that people from around the world can actually
have their input. You can also help, like, clean up

(47:14):
some data. So we've got all this programmatically cleaned data.
Without getting into the weeds here: when
you train a model on a book, let's say
The Bitcoin Standard, you don't just put the whole
book in there. You have to break it up into
a specific format. You have to turn it into
question-and-answer pairs based on every single paragraph, and all
that sort of stuff. So some of the stuff in
there might be irrelevant. Like, we went and transcribed a

(47:35):
bunch of podcasts, for example, and then we broke it
up into chunks, and before feeding it to
the model, we do a programmatic cleanup to try
and get rid of stuff that's irrelevant. Like, you know,
on a podcast, you say what I ate for breakfast,
what I did last week with my wife, whatever;
all that shit's irrelevant. But you can't programmatically get all
of that out of there, so you need some human kind

(47:56):
of assistance. So, you know, people are in there helping
us kind of clean that, adjust questions, make it
more relevant, this and that, and, yeah, earning
SATs for it. We've got randoms from, like, the Philippines, from El Salvador,
from Europe, from America, from Australia, all this sort
of stuff, and we're able to do that, which is
really interesting, because we can pay them SATs from anywhere

(48:17):
around the world. They can be completely anonymous. They don't
have to, like, you know... it's beautiful. And this
is something that, for example, people like OpenAI can't
really do, because they're paying everyone in dollars. They're doing
everything kind of the old-school way. So yeah,
we've got some interesting advantages there. And,
you know, I will say, if anyone's interested in that

(48:38):
sort of stuff and they want to earn some little,
you know, side cash and want to be involved and actually
help be part of the solution, they can
go check out spiritofsatoshi.ai, and there's
a little train-the-model button that they
can click and participate. But, like, yeah,
this is a big science experiment of a project, but I

(49:00):
think it will become a utility. As you said, it's
like we're building a tractor before there's really any farms.
But I think I have this sense, this instinct that
there's something here. We don't really know what the application
is going to be, but I get the sense that,
you know, having an alternative to ChatGPT is going
to be immensely important in the coming two to five years.
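
A minimal sketch of the cleanup-and-chunking pipeline described a few turns back, under stated assumptions: the irrelevance markers and the Q&A pair format are invented for illustration, and a real pipeline would add a model-drafted question plus the human review step Aleks describes.

```python
# Minimal sketch of the data-prep step: split a source text into paragraph
# chunks, drop obviously irrelevant ones programmatically, and shape the
# rest into question/answer pairs for human cleanup. Illustrative only.

IRRELEVANT_MARKERS = ("what i ate for breakfast", "last week with my wife")

def chunk_paragraphs(text: str) -> list[str]:
    # Naive paragraph split; real pipelines segment more carefully.
    return [p.strip() for p in text.split("\n\n") if p.strip()]

def looks_relevant(paragraph: str) -> bool:
    lowered = paragraph.lower()
    return not any(marker in lowered for marker in IRRELEVANT_MARKERS)

def to_qa_pair(paragraph: str) -> dict:
    # A human reviewer then adjusts the drafted question; that review is
    # the "proof of knowledge" step described above.
    return {"question": f"Explain: {paragraph[:60]}...", "answer": paragraph}

def prepare(text: str) -> list[dict]:
    return [to_qa_pair(p) for p in chunk_paragraphs(text) if looks_relevant(p)]
```

As the transcript notes, the programmatic filter can't catch everything, which is why paid human cleanup sits downstream of this step.
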

Speaker 1 (49:23):
Yeah, yeah. Well, I'm already starting to, you know, my
mind is racing with potential ways that this
could be applied. Like, number one, looking at markets.
It's like, all right, given what's going on
right now, you know, I could say, what's
going on with monetary policy and fiscal policy and things like that,
like, what would, you know, Rothbard and Mises

(49:45):
and Hayek say is likely to be
the result of this, economically speaking? Or even help
with, you know, trying to put together monetary policy. We're
seeing kind of a movement around the world, different countries
moving away from Keynesian economics, and it's like, okay, well,
we don't have a massive body of economists around the

(50:06):
world that are trained correctly. So, looking at, okay,
you know, a lot of these guys are dead, so
how do we put together some, you know,
good monetary policy from a national perspective? That could
be another potential application.

Speaker 2 (50:21):
Totally, really cool, totally. Yeah,
this is the beauty of tools, right?
Once you have a tool, you know, you can
kind of come up with ways to use it, and
then, you know, the market adapts. And this is
where stuff like open source is really important. So we're
going to open-source the whole tool and then allow

(50:42):
people to figure it out, you know, I mean, if
they've got the compute power at home to run it locally.
If they don't, you know, they can just use
it through our portal online, and, you know, they can
pay some sats for it if they want
to use it, you know, as a power user or whatever.
But, like, yeah, we're embarking on something big, and
it's gonna be one of those things where we look

(51:02):
back in three to five years from now and be like...
I get the sense we'll
be somewhere in three to five years that we had
zero ability to predict right now. Like, I have no
fucking idea where we're going to be, but I think
it's a journey worth taking.

Speaker 1 (51:20):
And if anybody wants to join along and help be
part of the solution and get paid some SATs, the
website is spiritofsatoshi.ai.

Speaker 2 (51:29):
Yes, correct, Okay.

Speaker 1 (51:31):
I'll have that link in the show
notes as well for anybody who wants to copy and
paste it or click on the link. Well, I don't
want to take up too much of your time today,
but I really appreciate you coming on
the show and talking about this. Really exciting. I think
at the end of the day, anytime we see something

(51:51):
wrong with our world, it is always a
better option to choose to build a solution rather than
just complain about the way things are. And you're definitely
doing that here, so I appreciate it. Thank you for
doing that.

Speaker 2 (52:06):
Totally, man. Thank you so much for having me on and
sort of helping get the word out with all of
these things. Every time, man, it means a lot. And yeah,
as you said: we complain
or we build. That's it, like, yeah, one of the two,
and, you know, we'd rather do something about it than,
you know, wait for somebody else, and I try
to make that, you know, the theme

(52:29):
of my life. So we'll see.

Speaker 1 (52:32):
Well, thanks so much. Looking forward to seeing where this
thing goes over the coming years, and it's
spiritofsatoshi.ai. Thanks so much for joining. We'll talk
to you again soon.

Speaker 2 (52:39):
Thank you, Joe. Take care, man.