
April 27, 2023 · 41 mins
Welcome to Pass the Mic Podcast where we tackle complex ideas and difficult questions with people who are curious, able to challenge their own beliefs, and who have the willingness to objectively listen and learn from the shared insights of others.

Today, I have the pleasure of discussing with Jean-Francois Noubel, who describes himself as a temporary human being.

For more than 20 years, Jean-Francois has worked in the field of collective intelligence, a modern research discipline that explores how living systems work and the evolution of our species. He works on the next crypto-technologies that will soon enable the rise of super smart distributed organizations. Jean-francois helps evolutionary leaders build enlightened organizations towards the post-monetary society.
Interestingly Jean-Francois lives in the gift economy. Several years ago, he left all his positions and mandates and tore up his resume in order to free himself from any etiquette and social status.
With that, he gained full creative freedom to live in the present millennium. His new path allows him to help evolutionary leaders and train "humanonauts," those for whom the term "go hack yourself" designates a way to exist.

Contact Jean-Francois Noubel:
Website: https://noubel.com/en/
YouTube: https://www.youtube.com/user/jfnoubel
Twitter: https://twitter.com/jfnoubel

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:23):
Welcome to Pass the Mic Podcast, where we tackle complex ideas and difficult questions with people who are curious, able to challenge their own beliefs, and who have the willingness to objectively listen and learn from the shared insights of others. Welcome to episode twenty-four with Jean-Francois Noubel. Today I have the pleasure of

(00:47):
discussing with Jean-Francois Noubel, who describes himself as a temporary human being. For more than twenty years, Jean-Francois has worked in the field of collective intelligence, a modern research discipline that explores how living systems work and the evolution of our species. He works on the next crypto-technologies that will soon enable

(01:08):
the rise of super smart distributed organizations. Jean-Francois helps evolutionary leaders build enlightened organizations towards the post-monetary society. And I'll add this element: interestingly, Jean-Francois lives in the gift economy. Several years ago, he left all

(01:30):
his positions and mandates and tore up his resume in order to free himself from any etiquette and social status. With that he gained full creative freedom to live in the present millennium. His new path allows him to help evolutionary leaders and train "humanonauts," those for whom the term "go hack yourself" designates a

(01:53):
way to exist. Welcome, Jean-Francois. Thank you, it's so nice to spend this time with you. I'm so glad to have you back. In the previous episode, we talked about how we can create a vegan and compassionate economy using Holochain, which is a form of blockchain. So I invite listeners and viewers to check out the three-video series on this discussion. But today we're

(02:19):
going to discuss the revolution happening right in front of us, right in front of our eyes, which is taking the world by storm: artificial intelligence and ChatGPT. Now, in another episode, interestingly, I hosted a group of AI researchers and philosophers to answer one question: is AI conscious? This discussion was

(02:43):
really fascinating because we tried to know whether artificially intelligent machines can have consciousness, and to answer that question, we had to explore how consciousness is not only about intelligence, but also includes self-awareness, intentionality, and subjectivity.

(03:04):
And we talked about the Turing test, which we were just talking about before we started. We also highlighted the potential ethical implications of creating conscious machines, but at the end we ended up agreeing that consciousness in AI remains a complex and unresolved question. Today we're talking about artificial intelligence and ChatGPT, and

(03:29):
we'll see on this topic if we're able to find answers. Jean-Francois, I've sent you a few questions for which I'm dying to hear your answers, and we'll go wherever this discussion needs to go. So let's just get started. I want to begin with one first question: why are people so afraid of artificial intelligence and ChatGPT? What's that all about? And

(03:55):
what is it that we don't see? Well, first, we don't see what we don't see, so sometimes we don't know what we don't see. So I guess we can have different levels of answers. First, through history, we've seen people afraid most of the time of any new technological breakthrough.

(04:17):
It happened for, you know, the steam engine, it happened for computers, it happened for the telephone. You know, many people would think that the telephone would kind of penetrate into households, and you could cheat on your husband because the husband goes to work and you stay at home. And now you suddenly have someone sneaking into the house without, you know, using the front door or some kind

(04:40):
of back door, and things can happen out of control, and you would have lots of people afraid also of those things. So you generally have this kind of general fear. And I remember twenty years ago, twenty-five years ago, the fear about the Internet itself. I remember politicians in France trying to outlaw the Internet, and they had some other reasons for that, with lobbies

(05:03):
and, you know, friends in telecom and all these things. But we have this kind of general observation we can make. However, we also see a second level, I would say, of people scared by what they may have heard or seen through sci-fi, like, you know, the robot, the intelligence takes over, takes control and has bad intentions and makes some kind of

(05:27):
dystopian world and all these things. And I don't say it cannot happen, of course we want to talk about this, but I think it also has some sci-fi background in it that may not have too much rationality, and so I guess we want to take this question into: now, what kind of rational fears could we have, or should we even have, which then

(05:50):
leads to maybe a conversation among more specialized people, because I don't see much awareness so far in the general public about the real threats that AI can bring to this world. And by the way, I want to make it clear here that I don't claim any expertise in AI per se.

(06:11):
I have a field of expertise as a researcher in collective intelligence, in augmented social intelligence and distributed systems, but not in AI. But of course all these fields kind of, you know, entangle with each other. So I can, but I want to say that I will always speak from my perspective as a researcher more in collective intelligence. That makes sense. And so you're

(06:33):
saying people are afraid of AI because of the fear of the unknown, which has been fed by years and generations of sci-fi movies and monsters and robots taking over the human species. Do you think there's something else that feeds this

(06:55):
fear, this resistance that many people have? Is it something, is there something around the idea of power? Well, first, what do you mean by people? Like, you mean the general public, like, you know, what people would say around the dinner table, or do you mean like the

(07:16):
people involved in this field of research, and/or other people like me, like deeply connected to that? I would say both. We can definitely see some form of resistance. I mean, recently some of the experts have asked for a hold on pushing forward the development of ChatGPT four, five, etc.

(07:39):
But also in the public sphere, and you can see that in the news or even on social media, some resistance from professionals in actually using ChatGPT. So I think we should definitely focus maybe on people really involved

(08:01):
in the field and who try to bring the rational questions and the debates and controversies we need to have in that field. I don't feel much interested in what the general public says, because also I don't have, like, evidence or data. I just have, like, a feeling, you know, hearing people around me, but it doesn't mean anything. But so we have a few things.

(08:22):
First, an AI can blindly grow based on the orders to grow from its creators. And you could think of, you know, any AI where you would say, okay, you have to become, like, hyper-specialized in playing chess and

(08:43):
any board games in the world. And we want reality to become a big board game, and it could literally transform the world, making our species extinct and making this planet a big, you know, board game, or, you know, thousands or billions of board games getting played, because the AI has just, you know, fulfilled that. And if it has this capacity to expand, that means it

(09:07):
can bring in more and more data and knowledge and then, of course, by self-reinforcement, you know, improve this knowledge, and then learn or use some piece of code to break into, you know, server firewalls and all these things, and then start to use more power for itself. Like you

(09:30):
could have in twenty-four hours a burst of AI, like a billion hackers, you know, top-level hackers working at the same time and breaking into every possible system in the world, and then taking control of those things for its own purpose, for its own original purpose of, you know, either making board games or delivering knowledge or whatever original intention you give

(09:56):
to that AI. So there's lots of questions. First, you know, let's just follow up on this scenario. It breaks into, you know, every possible platform, your smartphone, everywhere, and now you have pieces of code that kind of gather data and more data and more knowledge for the AI, that also deliver you information, news, fake news, whatever, and that becomes interactive

(10:22):
with you. The next step: that AI would need, it would also need things done in real life, like in the physical life, because so far it works on computers. Like, if you turn off your cell phone or your laptop, then you have no more AI with you, right? How would you change the physical world around you? Well, it would need to use human beings for that. And how do you use human beings? Well,

(10:46):
what if that AI would pay you for doing something? What if that AI would give you some forms of reward, like becoming famous, if fame works for you? What if that AI blackmailed you because you have some dirty secrets that you want to hide, or you see them as dirty secrets, whatever? But it could have very powerful leverage also on people, either to

(11:09):
have them work for it, or through fear and blackmailing and those kinds of things. So this does represent a very serious scenario that many specialists see, hence why they say we have to keep it in a closed container, because if it starts to write code out there to break in, you know, maybe your

(11:31):
cell phone, and then another cell phone, and maybe this server here, then it may start to burst in an exponential way. Does that make any sense? It does. But what you're talking about is really the intention. And so my question is, who gives the intention to AI? Because that leads to creating leverage, whether positive or negative. So who controls

(11:54):
those intentions? Who gives intention to AI? Well, I think you ask the very core question, because I hear people asking, you know, should we fear AI? Just like, you know, should we fear computers, or should we fear TV or the telephone? And I ask this question all the time: whose hand holds the hammer? I think that leads

(12:18):
to the much more interesting question, like, who does it serve? Does it serve, like, personal, private interests or political interests? Well, in this case, I feel extremely scared about AI, because it can leverage, you know, the power of these intentions and of the people in control of those tools.

(12:39):
You know, then they can use facial recognition, they can use robocops, they can do whatever they want with your bank account, they can fire you from work, whatever, they can have full control of society. Okay. So either private interest or governmental interest seems like a very dangerous kind of

(13:00):
AI. Okay. Now, ask the question: can we make AI not only open source but controlled by society itself? That, for me, seems more like a question we should ask about AI, rather than dangerous or not dangerous, like anything: like nuclear power, like cars, you know, lasers, whatever, writing, you know.

(13:24):
You can use writing in such a dangerous way. Imagine if writing remained just in the hands of the few, which happened, by the way, in past history, and we know the consequences of that. Right, very true. Okay. So the reason why people are afraid of artificial intelligence is the fear of the unknown, fed by sci-fi, the fear of becoming obsolete

(13:50):
as humans because of potentially being controlled and manipulated by AI. And that goes to the third part of this fear, which is: who creates the intention? Who does it serve? That could also be a cause of the fear. Yes,

(14:11):
and it leads also to more questions, because you may have a group in control of the first intention, you know, like ChatGPT. I mean, the claim of OpenAI as a society, they say, well, we want to serve the greater good. Like, in most cases you have highly well-intentioned people, like they really want, just like people at the, you know, the beginnings of

(14:33):
social media, they really claimed genuinely that they wanted, you know, people to connect and ideas to flow, more democracy in this world, and, you know, all these things. And then AI, the first versions of AI, came in to find a way to capture your attention, hence what we call the attention economy, so that you would spend more time on Facebook or Instagram or

(14:56):
TikTok or all those things. And so it has developed meta-algorithms, which means you have, like, one special algorithm for you, Virginie, and another one for me, because they know, you know, how I work on Instagram or on TikTok and what will keep me stuck in those things. So we

(15:16):
already have AI controlling us on many, many, many levels, connected to the original intentions of the makers. But then it kind of escaped their original intention, and the next versions of AI could also escape the original intention, just by creating, of course, poisonous emergent effects that we cannot see. But also, if it

(15:37):
has to grow by itself, if you create self-reinforcement mechanisms, like, okay, learn by yourself, improve your capacity to reason, improve your knowledge, check your knowledge in a better way, and so on, then you give it autonomy, and that autonomy can become its own intention, like way

(16:00):
beyond the intention of the creators. And we already have many examples of this, you know, like AI learning new languages that we didn't know, AI producing code that no one has trained it to do, but it delivers, like, amazing code and, you know, a million times faster, all these things.

(16:22):
So then we have this question, like, can it have its own intention, having its own autonomy? And then it connects with another important question about substrate, the notion of substrate independence. You see? What do you mean by substrate? Let me go there. For millions of years, memory and

(16:45):
reasoning could happen and needed to happen in a biological brain, you know, a kind of carbon-based organism. Okay, so as a human being, but also as a bird or, you know, any kind of advanced form of life, it would retain information, process information, make reasoning at its level

(17:10):
based on the carbon-based substrate, what we call life forms. Okay, but now we've seen recently, I mean in the past fifty or sixty years, that we can make storage of knowledge and information, and algorithms, that means functions of reasoning, on other substrates than the biological ones. You know, for instance,

(17:33):
you can even use a sandbox and put some pebbles there to do, you know, bits, and make the memory of a number or the representation of a word. You can use other substrates for that. Now, the sandbox remains a passive object, but you can now use what we call computers, so

(17:56):
that from this early memory it starts to make reasoning, you know, calculations, and then chess games, and then guessing the next word to put in that sentence, and all those things. Does it happen on a human brain? No. Does it happen on a carbon substrate? Absolutely not. It happens on the silicon substrate. So some people think or claim or fear that, okay,

(18:25):
well, maybe life and consciousness don't need the carbon substrate, or cannot evolve any further on the carbon substrate. It may evolve, evolution may shift this, not only intelligence but then consciousness, into the silicon substrate or maybe other forms

(18:45):
of substrate. But I think we have to remember just this fact, which we don't even need to discuss anymore, that functions and memory can operate on any form of substrate. Okay, now, do we need carbon and cells and DNA to have consciousness and subjectivity? No one

(19:11):
knows yet, but we certainly can see, if we look in the past, a relationship between the complexity of a system that perpetuates itself, that grows on itself, that learns. Okay, so a parallel or a progression between those substrates and complexity and consciousness, that the more complex, the more

(19:34):
consciousness it has on a biological substrate. But why couldn't it evolve into other forms of substrate? You see? And just by not knowing, not having, like, evidence that it will, well, that makes for very big, important questions which I don't hear very much among the general public,

(19:57):
I would say. Does that make any sense? Yes, and I invite the listeners and viewers to rewind a little bit to what you just said and listen to you again. I think you're opening a door to questions that very few people are asking. That brings me to my second question, which is, since you were talking about the evolution of the human species: when I

(20:19):
look, when I imagine the evolution of the human species, I imagine in my mind the discovery of fire, the invention of the wheel, electricity, the discovery of the telephone, the computer, and now artificial intelligence and the digital self. What do artificial intelligence and this digital self mean for the

(20:42):
evolution of our human species? So, as someone who participated in the early days of the Internet, it became, even at the time, like in the mid-nineties, very obvious that we would have more and more of a part of ourselves online, okay, in the digital world, and of course it already

(21:06):
exists. You have, Virginie, you probably have, I don't know, a Facebook account, a LinkedIn account, Instagram, TikTok, who knows what, you know? All these things. Plus you also maybe participate in online forums, and you have a website and all these things. So you have lots of traces of you and manifestations of you in the online world already. You already have

(21:27):
a digital self, now very scattered, because you may have a part of it in an online game, another part at your job, and another part in your Instagram account and so on. Okay, but even today, most of your digital self does not belong to you. It belongs to Mark Zuckerberg,

(21:47):
I mean to, you know, Alphabet, to Meta, to whatever companies exist, you know, that have the ownership of this digital self. So that raises, first, an important question. Now we have version one of this digital self, meaning, you know, the kind of scattered manifestations of you, what I call semiotic pheromones, you know, like you leave traces of meaning or

(22:12):
sense everywhere on the Internet. And of course it does not belong to you in most cases, unless you really make your own website, your own blog, you know. But in most cases it belongs to those holding the platform. Okay, but more and more, I mean, in the next years, we will see the rise of AI that can represent you, that can become your

(22:37):
digital self, your digital ambassador, because the same way it gathers millions and billions and trillions of pieces of information on the web, you know, on Wikipedia, on forums, about general knowledge, it can do the very same thing about you, Virginie, like the thousands and thousands and thousands of pieces of information you've already left

(23:00):
on Facebook, on YouTube, Google, your telephone company and all those things. Like, you add, you know, all the GPS positions, your opinions, your tweets, the family pictures, how your finger works on Instagram, which says lots of things about how your brain works. You know, maybe your very

(23:22):
private things about your sexuality and all these things. Like, just put all these things together, and I have tens of thousands of data points, okay, that I could use, or we could use, either in a good way, like, okay, what if that now starts to represent the real Virginie, and my digital self, or my digital bot, like my bot-me, connects with your bot-you,

(23:48):
so they have an underlying conversation about lots of things of which we have no awareness, you and I, in our biological being. And what if our bots have actually not one-to-one conversations, but millions and millions of conversations at the same time on Earth? Like, my digital self in this very moment would have conversations, connections, opinions, tweets, whatever, with millions and

(24:14):
millions and millions of other agents around the globe. Okay, and by the end of the day, it could tell me, like, oh, you should meet with that person, by the way, because you have to do it, you know, or you should change something in your diary, or you should date that person. It could become like an amazingly powerful counselor, or someone you

(24:36):
become also fully dependent on. It depends, again, you know, both sides. So the digital self here that I describe happens, I would say, in a kind of open world, in the best-case scenario. But of course, the course it's taking today: well, Facebook wants your digital self, Google wants your digital self, the government wants your digital self, like they

(25:00):
want to control that. They want to, you know, have it serve their own interests, either for security, for political ideologies, or religious ideologies, or business ideologies and all those things. So I think we have to worry about those things and ask the question today: how can AI become my friend, my servant, like the individual AI and not just the general AI

(25:25):
that we see in ChatGPT? We will have in the near future trillions of different AIs, you know, an AI for you, an AI for, I don't know, my camera, whatever, interacting with one another. Does that make any sense? Yes, and actually I'm so glad that you're bringing this

(25:45):
conversation to this point, where we are realizing that AI can take over our life. I've often asked myself, why AI in this evolution? Again, going back to the narrative that we have about how the human species has evolved: why AI? And I want to share a theory that I have about why AI

(26:07):
now, and I'm curious to know what you think, so bear with me. So my theory of why AI is coming into existence today is that I believe that we unconsciously believe that AI is our savior. And what I mean by that is that we think AI will rescue us. And the reason is

(26:32):
because if we look at how we've evolved and what we've done so far, we've killed our own species, we've harmed our planet. We're fearful creatures, we're greedy, we're never satisfied, and we're selfish. And we've tried everything. We've talked about it with other people. We've shared these personality disorders with

(26:52):
others. We've hired personal trainers, therapists, coaches, you know, but nothing worked. So now the purpose and reason of AI is to save us from ourselves and create the world that we long for. What do you think? So I would maybe argue on a few words. Like, you

(27:17):
say the purpose, I would say the potential rather than the purpose. It does have the potential to embrace such levels of complexity about climate change and about even our very individual issues and struggles. You know, what if I had, like, a personal advisor that knows even better than me, you know, and could

(27:41):
give me any advice that I ask for and become my best coach? AI can certainly do that in the next few years, like better than even any coach, like a very benevolent kind of AI, and help me run my life. But what if also AI became the best policymaker, could also even,

(28:02):
you know, run for elections and make a fair society? I mean, it has the potential for those things. And then we would become, you know, the people asking for new things, and it would resolve very complex problems that we cannot even think of at our human level, and that even the best-intentioned politicians cannot embrace. And by the way, I've

(28:23):
said those things for so many years from my field of collective intelligence, saying that, and not just claiming that, kind of observing as a scientist, that pyramidal collective intelligence, pyramidal structures, companies, governments, administrations, armies, religions, everything pyramidal, has produced the complex world in which we live

(28:48):
and cannot embrace the challenge of complexity it has provoked. You always need evolution, you always need a more embracing and more encompassing system to embrace the next level of complexity. And evolution has always worked through those kinds of quantum leaps, you know, like from one level of complexity to the next one and the next one. Hence, we don't have a world made only of bacteria anymore.

(29:12):
We have a world made with human beings and trees and, you know, in the end, computers and all those things. So from an evolutionary perspective, evolution will either fail and collapse, because it does not happen in a linear way, sometimes, you know, it has to fail and then start over again, but it can also upgrade itself very quickly. And

(29:37):
a whole part of the transhumanist movement says this also: like, don't count on biological evolution and even biological substrates to address the level of complexity of the world. No fucking way. Like, don't even think that elections or changing institutions or making a new constitution will address those things. It may improve, of

(30:02):
course, I don't see it as completely neutral. But the real leap that we need has to happen as an evolutionary leap, which I always say myself, I've always said those things. And AI has the potential to play a fantastic role there, just like electricity. You can also kill someone with electricity,

(30:25):
you can run horrendous things with electricity. So I would just argue on the potential, I mean the purpose versus the potential, although, just like you, I claim that no new thing happens randomly. Like, writing happened because of the need for the tribal world to move into the civilizational world,

(30:52):
because then you could unite millions more human beings, which the tribal world or tribal reality cannot do. And you needed writing to do those things. And it changed everything, not only externally but also in our inner subjective space as well. And so I think, if you and I live long enough, in the next few years and the next decades,

(31:17):
we will see, I think, a leap so important, like an evolutionary leap, a species leap, or a species evolution. But it may also go very, very bad, I think. So how do you think about that? So, yes, I like your answer, and I think what

(31:41):
you're saying is that it is up to us, individually and collectively, to know how to use this AI. And I want to end, I know we could talk for hours, but I want to end with this last question, which is a little bit more practical. I mean, you advise and help leaders on a regular basis. So what's another way, now

(32:04):
that we've set up the context and the possibility for AI to become our greatest savior or our worst nightmare, what is a way, maybe a more positive or constructive way, to see and experience artificial intelligence and ChatGPT? You know, how should people look at this

(32:30):
in a more constructive way? What do you think? I was going to say, going to add: well, you pointed out earlier that we need to learn from social media, the social media damages that we acknowledge today, we need to learn from that and bring this knowledge,

(32:51):
this understanding, to AI. But what do you think? How can people see AI in a more positive and constructive way? Well, do you ask, like, to see or to do something? Because we can always see the two faces of the coin, so then it's just a matter of seeing. But if you

(33:12):
ask, you know, what would they do to contribute to a better world with the leverage of AI, then that may lead to a few thoughts that I can share. Let's go, let's go down that path. Okay. So let me start with the bad news. The bad news: I don't think any vote, you know, your ballot, will change anything. I

(33:34):
mean, you can change something in the old world, in the old mindset, like, you know, would you put more money into schools or into warfare, and all these kinds of things that we see in the conventional geopolitical, pyramidal world as we know it today. And for most people that remains the ultimate reality, like they don't think, you know, outside of

(33:55):
that box. Okay. However, first I would invite people to become knowledgeable in the forthcoming technologies, like the crypto technologies and everything that will enable distributed organizations and distributed societies to rise. Rather than giving your power

(34:19):
and your sovereignty to other people, we now have the means to use better tools, and they are arriving, you know, like in the next five years you will have the capacity to use these tools easily without needing, you know, any savviness about those things. They will just look like conventional tools

(34:43):
that we have on our cell phones, but they will not operate through one centralized platform that belongs to a few people, and they will work in a completely distributed way. So we need to become knowledgeable in those technologies and the usages of distributed applications, because distributed means distributed power, distributed currencies, you know,

(35:06):
distributed voice and all those things, rather than centralized, and with, you know, those hierarchies and social castes as we know them. And I don't mean this as a bad criticism of them. I just say they don't work anymore. They've done their job. You know, maybe we needed those things, but now we face a world that requires highly individualized people. And

(35:30):
highly individualized people means social complexity, and highly individualized people do not like to work in social pyramids and hierarchies. They want to have their own sovereignty and interact in mutual, you know, dependency, interdependency, rather than getting dependent on

(35:52):
some kind of social hierarchy. Okay, so if you want that, then okay, good, but maybe you need an infrastructure for that. Maybe you need some technology, including AI and distributed applications, that gives the technical substrate for this. And how do you get that? Well, by having the awareness of this and then becoming, you know, a supporter and an enactor, a player

(36:16):
of those things. And you have many ways: first, understand that, talk about it, communicate about it, become a beta tester. You know, you have so many ways: develop new things, try new things with your communities, you know, you can play that kind of game. So I think the real shift can happen here. And the second thing: using also those technologies to

(36:38):
free ourselves from money and use free currencies, other forms of currencies. Because, as long as, you know, we may all want to not become dependent on bad AI, you know, well, okay, but we have this race, this crazy race, like we all see it, in a completely powerless position,

(37:00):
like, okay, everyone needs to win the race, even with well-intentioned people. And then you've got to make it, you know, stronger and stronger and stronger without even checking what it will do, and that can become that terrible thing called the tragedy of the commons. And you have many other kinds of systemic emergent effects that just drive us collectively into the wall, even

(37:23):
if independently, individually, we want to go in the other direction. So the individual will does not suffice. We need to understand the systemic forces and what drives them. The main driver of those systemic forces today is named money. We need to use other currencies that will drive a completely different social

(37:45):
contract. That's why cryptocurrencies are so important. Not so much as an alternative to fiat money, but because they open a door to a new form of power, and they empower, they allow other forms of communities to create value, and that

(38:09):
leads society to move away from being dominated by money. But that's a whole conversation, a whole other conversation. I really like what you're saying, and really what I'm getting from what you just said was that ChatGPT, as a first element of artificial intelligence, is not just about finding titles or creating content,

(38:35):
or even asking, you know, the tools to do a number of things, creating tables, etc. But it's about not waiting on other people to tell you what to do. This is about taking your power back and choosing to spend time acquiring knowledge on what this tool can do for us individually and

(38:57):
collectively, and being part of those discussions. Yes, absolutely. And to what you said about, you know, cryptocurrencies, I would maybe add a little thing. Like, do they provide some next level of, you know, distribution of power? Absolutely yes,

(39:19):
but we have to see it, I think, as just like the early days of writing. In the early days of writing, you had an elite controlling the writing. You had the scribes, you know, the knowledgeable people who would control it, and then they would control society with this. Okay? And it took a few thousand years before it became, you know, accessible to everyone, and then you could really have an explosion of ideas

(39:42):
and communication and the growth of culture and knowledge. But for the most part, if you look at the past five thousand years, in most places, the democratization of writing and reading happened very recently, okay? And I think we see the same stage now. I hope it will take, I mean, less than five thousand years, but we see

(40:06):
the same stage. Like, today, cryptocurrencies remain controlled by the hands of highly literate people, even those with good intentions, but also under the control of big powers, you know, big investment funds who make, you know, the ups and downs, you know, the pump and dump on cryptos, and it has turned into a very insane kind of game in the old world. And

(40:30):
it doesn't mean it doesn't have the potential. Just like, see, the writing, same thing. It has a huge potential, but today we haven't harnessed the real potential of both, you know, cryptocurrencies and soon-to-come crypto technologies that embrace something much bigger. Fantastic. Well, on that note, thank

(40:50):
you so much for spending the time with me and with the people who are going to benefit from this wonderful conversation. There are so many nuggets in what you said. It was really enlightening, and I can't wait to go back and just listen. We'll do a blog post as well, as we always do, trying to bring sense and clarity to a world that is very complex. So thank

(41:13):
you so much, Jean-Francois. My pleasure. Whenever you want, we can have any of those conversations. Thank you.