All Episodes

July 20, 2025 115 mins
“Any sufficiently advanced technology is indistinguishable from magic.” So states Sir Arthur C. Clarke’s famous Third Law. On Sunday, July 20, 2025, at 1 p.m. U.S. Pacific Time, the U.S. Transhumanist Party Virtual Enlightenment Salon will delve into this insight further.
In "Computer Science and its Occult," Eric Hennigan, a Software Engineer at Google, explores the intriguing parallels between concepts in computer science and themes found in occult traditions. The talk delves into ideas like "True Names," which in computer science can uniquely identify a thing and provide power over it, much like a memory address. Hennigan also discusses "Sigils," demonstrating how autonomous vehicles can be "trapped" by specific markings or by placing a traffic cone on their sensors, rendering them inoperable. The presentation further examines "Spells" in the context of sorting algorithms and code snippets. Hennigan draws a compelling connection to "The Golem of Prague," comparing the legendary animated clay figure to the modern process of baking silicon into crystal, etching patterns, animating with electricity, and commanding with scripts, ultimately teaching machines to learn, write, speak, and paint. This leads to profound questions about consciousness, ethics in creating sentient beings, and the nature of free will in machines.
Eric Hennigan is a Senior Software Engineer at Google and a Volunteer Genomics Data Scientist at the University of Southern California. He holds Bachelor of Science degrees in Applied Mathematics and Physics from the University of California, Los Angeles, and a PhD in Computer Science from the University of California, Irvine. He enjoys communicating what he has learned to others and even acted as Instructor of Record for UCI and Cal State Fullerton while still a graduate student. Despite his formal education, he recommends self-study as a path to skills improvement.
Watch the first U.S. Transhumanist Party Virtual Enlightenment Salon with Eric Hennigan, held on October 23, 2022, on the subject of historical anti-aging approaches that failed: https://www.youtube.com/watch?v=TMod8KtspxQ 
Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Greetings and welcome to the United States Transhumanist Party Virtual
Enlightenment Salon. My name is Gennady Stolyarov II, and
I am the Chairman of the US Transhumanist Party. Here
we hold conversations with some of the world's leading thinkers
in longevity, science, technology, philosophy, and politics. Like the philosophers

(00:22):
of the Age of Enlightenment, we aim to connect every
field of human endeavor and arrive at new insights to
achieve longer lives, greater rationality, and the progress of our civilization. Greetings,
ladies and gentlemen, and welcome to our US Transhumanist Party
Virtual Enlightenment Salon of Sunday, July twentieth, two thousand twenty-five.

(00:45):
Today we have a fascinating conversation in store for you,
which derives from a now famous statement, Sir
Arthur C. Clarke's Third Law, which states that any sufficiently
advanced technology is indistinguishable from magic. And today we'll have
ample examples of that in the realm of computer science.

(01:09):
But first I would like to introduce our distinguished panel,
including our current Vice Chairman and Director of Visual Art, Art
Ramon Garcia; our Director of Scholarship, Dan Elton; our Director
of Citizen and Community Science and twenty twenty-four US
Vice Presidential candidate, Daniel Tweed; and our Technology Advisor and

(01:30):
Foreign Ambassador in Spain, Dr. José Cordeiro; and our
special guest today returning for the second time to our
Virtual Enlightenment Salon, is Eric Hennigan. He is a senior
software engineer at Google and also a volunteer genomics data
scientist at the University of Southern California. He holds Bachelor

(01:53):
of Science degrees in applied mathematics and physics from the
University of California, Los Angeles, and a PhD in Computer
Science from the University of California, Irvine. He enjoys communicating
what he has learned to others, and even acted as
instructor of record for UCI and cal State Fullerton while
still a graduate student. And despite his formal education, he

(02:14):
recommends self study as a path to skills improvement, and
some of his self study has led him to fascinating
discoveries and insights, which he synthesizes from time to time
into quite fascinating presentations, and the one you will hear
today is indeed among them. It is called Computer Science

(02:37):
and Its Occult. And it seems, Eric, that you
are wearing some interesting attire for this presentation, which is
actually quite relevant and it will be explained in due time.
But Eric, welcome and please go ahead.

Speaker 2 (02:53):
Thank you for the introduction, Gennady. I'm very happy to
be here with the transhumanists and to give what
I hope to be a quite entertaining presentation and conversation about
the parallels of practicing programming and being a magician. So
I believe, after a careful reflection, that we are modern magicians.

(03:19):
I do happen to work for the Google, but I
am not representing them here. I am just having some
fun with this connection. So we have a joke in
computer science about naming things. It is one of the
two hard problems. So there are naming things and cache

(03:42):
invalidation and off-by-one errors. Those are our two
hard problems. But the reason naming things is quite difficult
is not just because names have power; it's because
the world changes over time. So the name that you
may have originally given something doesn't quite fit after

(04:03):
a few months because your space has changed, your customers
want something different, and then you're just sort of stuck
with that old name, even though it's ill fitting. I
was thinking about how this relates to magic, because in
many magical stories, once you have the power, once you
have the name of something, you get power and mastery

(04:25):
over that thing. And it occurred to me that the
true name of anything within a program is its memory address.
Once you have the memory address, then you can alter
the value of what is stored there, and you have
true power over its contents. So memory addresses are the

(04:50):
true names for the software engineers. And the first rule,
of course, is that you don't want to allow anybody to have
the name if you do not want to give them
permission to edit the content there. So we definitely make
a lot of effort within the structure of programming in

(05:13):
order to encapsulate those names and share them only when
we need the other party to make edits or modifications.
So controlling your variables and controlling the values is a
very important part of writing a program that is understandable
and that works well into the future.
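
(To make the true-name idea concrete, here is a minimal C sketch; this is an illustration added for this transcript, not code from the talk's slides. Whoever holds a variable's address can rewrite its contents.)

```c
#include <stdio.h>

int main(void) {
    int gold = 100;            /* an ordinary variable */
    int *true_name = &gold;    /* its "true name": the memory address */

    *true_name = 9999;         /* holding the name grants power over the value */
    printf("gold is now %d\n", gold);  /* prints: gold is now 9999 */
    return 0;
}
```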

(05:38):
I jumped ahead a little bit, right? The
two things that we have problems with are cache
invalidation and naming things, and off-by-one errors. So
I already revealed that joke to you, which is fine.
And what can true names do? Right? This cartoon

(05:59):
encapsulates what I'm saying. We have the man
in the black hat who is possibly in control of
the XKCD universe, and a poor programmer who is trying
to engage in, I think, bettering their gameplay, and asks

(06:19):
for a few pointers. Of course, the man in the
black hat gives him memory addresses, and this is not
of any use to the poor game designer or player.
So oh well, but that really just emphasizes the point.
When you're a magician, these are the values that make

(06:41):
sense for you. These are the ones that give you
the power, and when you're an ordinary mortal they are
completely useless and not very well understood. Another interesting aspect
of magic is that you can control a demon, or

(07:02):
at least trap it with a salt circle. So if
you are to summon a demon, you do not want
it to get out free into the world. You want
to keep it sort of in a controlled space so
that you can interact with it safely, and the way
to do that is to form a salt circle. Now
in twenty eighteen, there was this wonderful artist, I believe

(07:26):
in Europe, Great Britain, and he thought, oh, these autonomous vehicles,
they are following the lines on the road, and sure
they're autonomous, but what would happen if you were to
paint the lines on the road in such a way
that it surrounds the vehicle and it turns out at

(07:47):
that time the vehicle is just trapped. So if you
were to paint these lines with salt, then you are
essentially trapping a demon and preventing it from getting loose
onto the roadway. So these are just like really fun
ways of playing with the idea that computer science, or

(08:09):
programming is a magical art. We can actually use salt
circles to trap some of our devices. As recently as
twenty twenty three, you were able to confuse one of
the autonomous vehicles merely by placing a cone on the
hood of the vehicle so it was not enjoying that

(08:30):
sort of data and it would pause and not go,
not drive anymore, and not hit anyone. And it's a
really clever, low cost way of disabling such a device
if you don't want it running around on your streets.
Now we know for sure that the operators in this case,
it looks like Waymo, they're going to have to come

(08:53):
by and remove the cone at some point. But it's just a
really hilarious way of partitioning off what is magic from
what is not, just with a low-cost hack. And
it turns out that magic is
a collection of such hacks, and engineers just learn these

(09:15):
things over time. Where do we learn them? It turns
out that we have special schools. No, they're not really
that special; it's universities. Kind of like Hogwarts, I
guess? No, this is really just the MIT Computer Science
and Artificial Intelligence Lab, famously CSAIL. And at these

(09:40):
schools we train the engineers over four years with all
the tips and tricks of the trade, and so upon
graduation they can do things that ordinary people
cannot do, because they haven't learned them. Now, there
is a way to speed up this process. You don't
have to get a degree in order to be a
programmer. You can actually just learn all the content

(10:03):
online within about a year or so. It's very short,
very low cost. Our magical enterprise is not very well protected.
In fact, we are encouraging people to join in and
giving away the knowledge for free online, if you know
where to look for it. The schools, though, are not

(10:23):
the only thing that makes computer science as a practice
of magic identifiable. There is another really interesting thing. We
have a special attire, and it turns out to be
the hoodie. Right, So I did a quick image search

(10:45):
for hacker and all of them are dressed in hoodies,
which I find delightful. Right? So we've got ghostly figures
at a keyboard, typing into the seedy belly of what
operates today's society and making small tweaks and modifications, so

(11:06):
just you can recognize us through our special clothing, which is
very amusing. At the schools we receive magic lessons.
It turns out that this lesson in particular, which I
don't want to play here because it's a full hour
on its own, is by Gerald Sussman. He's a teacher at MIT,

(11:32):
and he has written a book on the Structure and
Interpretation of Computer Programs. In this particular lecture he's
talking about the metacircular evaluator, which I will describe
a little bit more in detail later. What I really
enjoy about this lecture is he decided that this was
like the magic part of the entire class, and so

(11:56):
for this lecture he wears a fez and talks
about how that lecture will be an investigation into the
magic of what makes Lisp as a language work and how
to implement it. Let's see if I can skip past
the video. Play it, okay. In these classes we learn

(12:24):
how to make spells. This is one of our oldest spells.
As you can tell, it's on a punch card. Our
modern spells are stored as plain text files, which is
much more easy for us magicians to read and write
and manipulate, to share and review. But in the early days,

(12:45):
the computers were not quite so advanced, and we used
punch cards. And I think this just emphasizes the esoteric
nature of what it means to be a programmer. If
I were around and I had to use this technology,
I would, But the fact that I can use more

(13:05):
convenient things like text makes my abilities much more
powerful because it's faster and it scales better. But it's
not the thing that motivates me. The thing that motivates
me is just describing what an algorithm does or what
a computer should do. And if this is the interface,

(13:25):
that is what I would learn to use and it
would be all the more esoteric for having to go
through that process. And it is quite amazing that as
humans we've figured out how to build such things. And
if I want to take a brief aside, since this
talk is really just about fun, the punch card originates

(13:49):
from the loom all the way back in the early
industrialization of our technology. So when you are weaving a fabric,
my fabric happens to have a sort of brocade in
it, which you probably cannot see, but these patterns
needed to be described by the operator at the loom,

(14:12):
and eventually that led to a little bit of mechanization,
and the way to describe which threads are up and
which threads are down in the weave was through a
punch card. And that was early industrialization, and it just
led in through to using computers as well, because the

(14:33):
absence or presence of a hole in the card is
something that a machine is able to read. We have
many spells. One of the earliest ones, if you are
a computer science engineer and you go through the program,
is to learn about sorting. And there are many ways

(14:54):
to sort, like quicksort and merge sort and
tree sort and all sorts of things. They each have
their trade-offs, right, their best, average, and worst case performances,
and how much space they take up when you're operating.
But I highlight the sorting spells in particular because you know,

(15:15):
there's a Harry Potter reference here. It's one of
the basic things that computer scientists do, which is to
categorize the data into different places, and once categorized, to
maybe give it an index, and once it's indexed, to
be able to find it right. And sorting plays a

(15:37):
very critical role in all of that infrastructure.
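
(As a concrete flavor of the sorting spell, here is a small C example using the standard library's qsort; an illustration added for this transcript, not a slide from the talk.)

```c
#include <stdio.h>
#include <stdlib.h>

/* Comparator: tells qsort how to order two elements. */
static int compare_ints(const void *a, const void *b) {
    int x = *(const int *)a;
    int y = *(const int *)b;
    return (x > y) - (x < y);   /* -1, 0, or 1 without overflow */
}

int main(void) {
    int data[] = {42, 7, 19, 3, 23};
    size_t n = sizeof data / sizeof data[0];

    qsort(data, n, sizeof data[0], compare_ints);  /* cast the spell */

    for (size_t i = 0; i < n; i++)
        printf("%d ", data[i]);   /* prints: 3 7 19 23 42 */
    printf("\n");
    return 0;
}
```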

(15:59):
There are many more esoteric spells that we have, and the computer
scientists among you might recognize this one. I didn't give
the function name here; well, it's a send. It is a way to copy data from
one place to another. And you can see the addresses,
the *to and the *from; we're going to
copy data from one place to another. As a computer
scientist reading this, it really breaks your brain, because you
have a switch sort of nested inside of the do-

(16:24):
while loop, or the do-while loop is nested inside
of the switch, and they're sort of intermixed and interleaved,
and that is not a normal thing that you would
do in a program. In fact, many of the modern languages,
the structured programming languages, don't even allow you to write this.
And this is called Duff's device. It was created in

(16:49):
order to copy data from one place to another. I
think it was done by Tom Duff at Lucasfilm, to copy data from
one place to another as efficiently as possible. He thought about, well,
how would I do this if I were writing assembler,
this is how he would do it. And then he
translated that assembly into C code, and lo and behold

(17:10):
it compiled. Even though nobody had written anything like this
with the structure for like ten or thirteen years of
the C compiler existing. So it was quite a surprise
that such a challenging syntax even ran through and did
the right thing without having some weird error or complaint.

(17:31):
And again, the modern languages don't even let you nest
these structures; you have to keep them separate.
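
(For reference, this is the classic shape of Duff's device, reconstructed from its well-known published form; the talk's slide may differ in details such as types. Comments are added here.)

```c
/* Duff's device: an unrolled copy loop with the switch and the
 * do-while interlaced. "to" is deliberately not incremented, as in
 * the original, which targeted a memory-mapped output register. */
void send(short *to, short *from, int count) {
    int n = (count + 7) / 8;      /* number of passes, unrolled 8x */
    switch (count % 8) {
    case 0: do { *to = *from++;
    case 7:      *to = *from++;
    case 6:      *to = *from++;
    case 5:      *to = *from++;
    case 4:      *to = *from++;
    case 3:      *to = *from++;
    case 2:      *to = *from++;
    case 1:      *to = *from++;
            } while (--n > 0);
    }
}
```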

(17:56):
There's another one which I find quite amusing, because it is really,
really abusing things. So I'll step through it a little bit.
We have a float, and it's just half of x,
and then we have an integer derived from that float.
What we're going to do is we're going to say,
take the address of the float, treat it as if
it were the address of an integer, and then de-
reference that. So what this does is it's going to
give you the floating point bits and interpret them

(18:18):
as if they were an integer. There's no changes here
to the bits. There's no changes to the bits. But
float values are not something you would ever treat as
an integer because you're just going to get some weird,
crazy number, because of the way that the floats encode the
data, it's not a sensible integer. And then,

(18:40):
after that, what we're going to do with that nonsensical integer is
we're going to take this other crazy number, 0x5f
blah blah blah, that we pull out of
a hat somewhere, an unexplained crazy number, and just
subtract that integer shifted one bit over. So I think

(19:01):
that's dividing the integer in half and then subtracting it,
and then that's a new number, a new
bit representation of a new number. And what we're going
to do is we're going to take it, the
computer thinks it's an integer now, so we'll take its address,
treat it as a pointer to a float, and dereference
the float. So now we take those raw bits and treat

(19:21):
them as a float. Again, crazy thing to do. Just
take the raw bits and shuffle them around. Wild thing to do.
But it turns out this is an extremely important piece
of software because it operates extraordinarily quickly. It was written
for game rendering, in this case Doom, and it computes,

(19:47):
I give you the function name now at the top:
it computes the inverse square root, which is something to
do if you're doing any kind of triangulation, like a
squared equals b squared plus c squared. You want
to take the square root of that other side and
find the distance to what you're looking at. And this

(20:09):
little computation does exactly that. There's more information on Wikipedia
about the mathematical genius behind this trick and where
that magic number comes from. I'm surprised that it works
at all, and I'm doubly surprised that someone was smart
enough to find it.
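
(For the curious, this is the widely circulated fast inverse square root from the released id Software source, lightly modernized; the comments are added for this transcript.)

```c
#include <stdint.h>

/* Fast inverse square root: approximates 1/sqrt(x) using the famous
 * magic constant, then refines the guess with one Newton iteration. */
float q_rsqrt(float x) {
    float half = 0.5f * x;
    float y = x;
    int32_t i;

    i = *(int32_t *)&y;            /* reinterpret float bits as an integer */
    i = 0x5f3759df - (i >> 1);     /* the magic: shift and subtract        */
    y = *(float *)&i;              /* reinterpret the bits back as a float */
    y = y * (1.5f - half * y * y); /* one Newton-Raphson refinement step   */
    return y;
}
```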

(20:35):
We also have special languages in our field, right. There's actually a huge number of them.
You may recognize languages such as C plus plus or
Python or Java or JavaScript. There are other languages. HTML is not a
programming language, but a formatting language. We've made many different

(20:56):
specialized languages for each of the different things that we
might want to do. For example, I've recently
been learning OpenSCAD, which is a language for describing 3D
shapes that you might want to print on a
3D printer or cut with a computerized mill. And

(21:17):
it turns out that all of these things we've recognized
and abstracted a particular kind of pattern into language itself.
And that's the real power of computer science: it's kind
of an offshoot of mathematics, studying structured languages meant
for machines but readable by people. And we found out,

(21:39):
this is a book by Niklaus Wirth, my grand-advisor,
so my advisor's advisor at graduate school, and it identified that
algorithms and data structures together make a program. And I'll
explain a little bit what that means. So we have

(22:02):
this mind bending recursion. What we got on one side
with the very clear lines, those are various kinds of
data structures. It looks like they are put in a
linked list of some kind, so you can walk through them.
But there's something that looks like a tree, one that

(22:22):
looks like a graph on the bottom something that looks
like a skip list. And these are just ways of
arranging the data and what points to what, and we
describe this to each other with the diagram. That's the
data structure part. And they're really cool and we study
them in school. Now, it turns out if you are

(22:45):
to take a computer language and you want to do
something like automate the translation of the text that the
human writes into the operation codes that the machine executes,
so you have to make a translation between the programming
language and the assembly that runs. Then the way to

(23:08):
make that translation is to parse the language that the
human has written into a data structure that you can
operate on. And that's what's represented by the more black
boxes; you can see there are a lot of different
linguistic elements. But what you end up with is you
end up kind of like with a data flow or

(23:29):
a control flow that shows the moving pieces of what
the programmer is talking about when they're writing their code,
and with that, the compiler can treat the input as
a graph and make all the changes and operations and
transforms that it needs to do one piece at a time,

(23:51):
and spit out the assembly at the end that the
computer can execute. So we have this interesting thing that
what we consider as programs are also the things that we
are able to write and make and operate on, right?
So there's a bit of recursion here. Programs are not

(24:13):
just things that you run, like your web browser that
you run or a calculator that you run to get
a particular result. They're also a thing that can self describe,
which gives it an enormous amount of magical power. We
have special spell books. Here I highlight again the
Structure and Interpretation of Computer Programs. But any graduate that

(24:36):
goes through the program is going to accumulate one or
two books for each of their classes, and at the
end they'll have maybe a dozen books about how
to program and how to do various things, and various
languages and design and software engineering and all sorts of stuff.
Here I highlight again the metacircular evaluator because it's the

(25:01):
program that is able to take a program and run
the program right. So it is of a special nature.
It's not just taking any kind of input three plus
five and generating any kind of output eight. It is
taking a computer program in text as input, and it

(25:22):
is spitting out the operation that that text says that
you should do, and it itself is one of those programs,
so it can run itself, which is an amazing piece
of mathematical genius.
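
(The metacircular evaluator itself is written in Lisp and runs long, but here is a toy C evaluator for prefix arithmetic, added as an illustration, that shows the core trick: a program whose input is itself a program.)

```c
#include <stdio.h>
#include <stdlib.h>

/* A toy evaluator for prefix expressions such as "(+ 3 (* 4 5))".
 * Nowhere near Sussman's metacircular evaluator, but the same spirit:
 * text goes in, the operation the text describes comes out. */
static const char *p;  /* cursor into the source text */

static void skip_spaces(void) { while (*p == ' ') p++; }

static long eval(void) {
    skip_spaces();
    if (*p == '(') {                 /* compound expression */
        p++;                          /* consume '('        */
        skip_spaces();
        char op = *p++;               /* '+' or '*'         */
        long a = eval();              /* recurse on operands */
        long b = eval();
        skip_spaces();
        p++;                          /* consume ')'        */
        return op == '+' ? a + b : a * b;
    }
    return strtol(p, (char **)&p, 10);  /* atom: a number */
}

int main(void) {
    p = "(+ 3 (* 4 5))";
    printf("%ld\n", eval());   /* prints 23 */
    return 0;
}
```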

(25:45):
I don't have a slide on this, but what Alan Turing discovered is that once you have a
certain amount of computation, it's universal: that machine can
be used to describe other copies of itself and emulate
the execution of those copies. And this is
one of the genius things that we do as computer scientists.
There are a lot of cultural impacts that computer science has made.

Speaker 3 (26:07):
Right.

Speaker 2 (26:07):
It's not just that we completely change the way everybody
works by putting them on a computer, but we see
many different references to computer science in popular culture. So
here's Arthur C. Clarke: any sufficiently advanced technology is indistinguishable from magic.
That's mostly because the details are hidden, they're complicated. Only

(26:31):
a trained person can figure out those details and work
with them. It's a magician's enterprise. But here in Babylon Five,
for anybody that was around to watch that during the nineties,
there is a character that plays I think only for
a couple of episodes, and he's a techno-mage. His
job is using technology to make really flashy presentations like

(26:53):
a stage magician would do. And you see, you know,
when you look at the back end of any concert
or anything, they have audio engineers, they've got computers, they've
got their software. It's like a lot of this
is driven by output from this magical field just for
the purposes of entertainment, which is really cool. But

(27:15):
even South Park, which looks like cardboard cutouts, is all
computer animated. So we're using amazingly powerful devices to emulate
the effect of cardboard cutouts. For anyone that's watched Adventure Time,
there's also a very sad story about the wizard King

(27:36):
and his wife Betty, and both of them become a
wizard and a witch, respectively. Poor Simon loses his mind,
and Betty dramatically pursues trying to reunite with him, but

(27:56):
alters herself in the process so much that that rebirth
of their relationship can no longer occur. So it's a
very sad story, but it's one of struggle and development,
and it's one of magic and computer science because in

(28:17):
one of these episodes, they are going through the crown
that has made Simon crazy, and Betty's trying
to debug it. So here she appears as a floating head
of an avatar, and she is really quite capable of
looking through all the circuitry in that crown, right, and

(28:38):
having a technological background that makes you a wizard is
kind of really emphasizing this occult computer science aspect. And
they go through the crown, and she
sees that she is not able to fix it so
that she can have her husband back, which is really
quite a sad outcome. But we got a techno-mage

(28:59):
and a techno-witch here in a cartoon, just showing
up for fun and delight and drama. I was talking
about this similarity between computer programming, computer science, and
being a magician with my family, and they said, oh, you know what,

(29:21):
there's this guy that's written a whole bunch of books,
The Laundry Files, Charlie Stross, about exactly that coincidence. Now,
I haven't gone and read any of these yet. I
have read other works from him and it was quite enjoyable,
so I will probably read these and expect them to

(29:42):
be enjoyable as well. But they're all also predicated on
the idea that anybody operating as a computer scientist or
as a programmer is essentially acting as a wizard, and
he really emphasizes that duality in his books. Now, I'd

(30:02):
like to turn our attention a little bit to an
old Jewish story called the Golem of Prague. And according
to the legend, this Rabbi Loew and his community within Prague
were under attack, being terrorized by the surrounding gentiles,

(30:27):
and they were facing a sort of extreme situation. So
what he did is he decided that he would go
down to the river and he would take some clay
and he would shape it into a guardian, and then
he would write some Hebrew script, a prayer essentially that

(30:48):
contains a code, because it turns out the Hebrew
alphabet also doubles as numerical symbols. So their first letter,
aleph, is considered one, and we can do the same
with ours, from A to Z, one through twenty-six.
They do the same with theirs, and so you can

(31:11):
kind of treat a sequence of Hebrew letters as if
it were a sequence of numbers. And if you have
that mindset, you can say, oh, well, this is saying
something in a code that might allow you to engage
in magic.
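
(A toy illustration of the letters-as-numbers idea, using the A-to-Z version rather than the Hebrew; a sketch added for this transcript, not from the talk.)

```c
#include <ctype.h>
#include <stdio.h>

/* Sum a word's letters as numbers: A=1 ... Z=26, ignoring case. */
int gematria(const char *word) {
    int sum = 0;
    for (; *word; word++)
        if (isalpha((unsigned char)*word))
            sum += toupper((unsigned char)*word) - 'A' + 1;
    return sum;
}

int main(void) {
    printf("GOLEM = %d\n", gematria("GOLEM"));  /* 7+15+12+5+13 = 52 */
    return 0;
}
```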

(31:34):
And Rabbi Loew did exactly that. He wrote down the Hebrew script.
It contained a certain name of God and an animation. He feeds it to the guardian.
This picture doesn't have a mouth, but in some of
the stories, he is giving that scroll into the mouth
of the guardian, who is able to process and execute
those descriptions as a program, and that program acts

(31:55):
as the guardian for the Jewish community in Prague and
protect it from these attacks. One of the interesting things
to highlight in this story and in subsequent Jewish literature
is they identified the ability to speak and the ability
to name things like from Genesis as a very strong power.

(32:22):
And so the gollum itself is not allowed to speak
because it should not have this power. It is a
wordless slave that executes the commands that you write on
the scroll. And you have to be very careful about
those commands because, as any computer scientist will

(32:43):
tell you the computer does exactly what you tell it
to do, and that may be different than what you
intended to tell it to do. And this is a
problem that continually crops up and why our programs have
various bugs. It's an enormous complexity to get all of
the little details right, and some of our intentions are themselves in

(33:07):
conflict and it's difficult to resolve them. So here the
Jewish mythology is identifying the problem. You can say
what you say, but the interpretation of that by the
machine might be quite literal and not what you intended,
So be careful how you say things. And the reason

(33:29):
I highlight this story is because I think as a
field we are engaged in doing exactly this. So we have
gone into the world and we have taken silicon, which
happens to be like from sand in river beds, and
purified it and baked it into crystals, because, as

(33:53):
we know from stories of Atlantis, all advanced civilizations are
based on crystals. We bake the stuff into a crystal, and then
we etch particular patterns into it. And at this point
we're so advanced and pushing these patterns so small that
we're using X-ray lithography, and some of these circuit

(34:17):
traces are like two nanometers, and we're running nearly up
against the limit that a transistor, you know, can't be
smaller than an atom. So we're hitting some hard limits here.
But this approach, like we took the sand, made a crystal,
we give it some particular patterns. We have a way

(34:38):
of animating it with electricity to bring it alive so that
it can do things. And the earliest versions of this
were computer circuits that like did calculations. So some of
the earliest things that we wanted to do were for
accounting and census taking by the US government. We made

(34:59):
a bunch of computers for that. The military was very
interested in using computers for calculating the trajectory of their weapons,
so we made that. It used to be done with, like,
books of tables, and then we automated it into a computer
so that you can aim your weapons properly and account
for wind and drag and all the other things. So

(35:20):
then out of that comes modeling physics and modeling the world.
All of this stuff is very exciting, all animated by,
you know, the operationalization of mathematics in our world. So
once you have the circuitry, then we start a process

(35:40):
of oh, if we arrange the circuitry in a particular way,
then maybe it can do arbitrary computation and I can
just write a script and it will execute that script, right,
So I don't have a circuit that's only doing mathematical calculations,
like an eight digit calculator in your pocket. But now

(36:01):
it can do anything I program it to do. And
so we invent these spells, these scripts that we can
give to the computer and do anything we want. Here
it looks like we have Logo the turtle, and
you program Logo the turtle to just, like, take a
few steps forward and turn left or right. And once

(36:22):
it does that and you put it in a loop,
it does that repeatedly. Then you can get these nifty
spirograph-like patterns.
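
(A minimal sketch of the turtle idea in C, added here since Logo itself isn't shown: step forward, turn, repeat, and a spirograph-like star falls out of the printed vertices.)

```c
#include <math.h>
#include <stdio.h>

#define DEG2RAD (3.14159265358979323846 / 180.0)

/* A Logo-style turtle reduced to its essence: step forward, turn,
 * repeat in a loop. Pipe the printed vertices into any plotting
 * tool to see the star-shaped path. */
int main(void) {
    double x = 0.0, y = 0.0, heading = 0.0;   /* turtle state */

    for (int i = 0; i < 36; i++) {
        x += 50.0 * cos(heading * DEG2RAD);   /* step forward 50 units */
        y += 50.0 * sin(heading * DEG2RAD);
        heading += 100.0;                     /* turn 100 degrees */
        printf("%.1f %.1f\n", x, y);
    }
    return 0;
}
```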
After doing all those things, what's the
next problem to solve? Right now it can do arbitrary stuff.
The next problem to solve is: oh, can it learn
to do things on its own? Because it is awfully

(36:43):
tedious for me to describe to the computer what I
want it to do in every single circumstance, and in
many cases I don't know myself how to write that program. So,
for example, for a self driving car, it has to
take imagery data and lidar data and parse that
into a representation of the world, and then

(37:06):
make a decision about how to move left or right
or avoid an obstacle, identify the obstacles, park, wait for
various things like this is a huge number of different
concerns to try and program. There's no way that humans
are going to get all those details right. So what
we did instead is we figured out, oh, how do

(37:30):
our brains operate? Well, we've got these little neurons and
they connect together and they fire signals at each other. Okay,
can we make a mathematical model of that? Yes, we can,
and it looks an awful lot like some matrix multiplications,
And so we make that model, and then we get
the computer to go through training exercises where it's first

(37:54):
getting everything wrong, and then we tweak all the weights
in the model, the little values of these matrix multiplies,
and over time it starts getting the answer that we
consider the correct answer for the examples that we have,
and the computer is now able to learn.
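
(Here is a bare-bones C sketch of that tweak-the-weights loop, added as an illustration: one weight and one bias fit to y = 2x + 1 by gradient descent. Real networks stack millions of these, but the nudge-by-error step is the same idea.)

```c
#include <stdio.h>

int main(void) {
    double xs[] = {0, 1, 2, 3}, ys[] = {1, 3, 5, 7};  /* examples of y = 2x + 1 */
    double w = 0.0, b = 0.0, rate = 0.05;             /* start getting it wrong */

    for (int epoch = 0; epoch < 2000; epoch++) {
        for (int i = 0; i < 4; i++) {
            double err = (w * xs[i] + b) - ys[i];  /* prediction error */
            w -= rate * err * xs[i];               /* nudge the weight */
            b -= rate * err;                       /* nudge the bias   */
        }
    }
    printf("w = %.3f, b = %.3f\n", w, b);  /* approaches w = 2, b = 1 */
    return 0;
}
```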

(38:18):
This is extremely convenient for us, because now we can solve problems
for which our ordinary scripting languages are not suitable for
describing the solution. But it's also a very dangerous
aspect, because now the computer can learn, and so it
can do a lot of things that we may not
have anticipated or may not desire for it to do.

Speaker 4 (38:38):
One of the very.

Speaker 2 (38:39):
first things, ignoring the Jewish literature that told us not
to do this, that we decided to do is get
it to write. So one of the very early programs
was Eliza, which acts as a therapist and it's sort
of hard coded to give these various responses. Most of
them are questions. And it turned out, like, this

(39:04):
was very much a long time ago, not using
the neural nets or anything. It turned out that the
people who wrote this found it effective as a therapist
because the questions forced them to think up answers. And
that's actually most of the benefit of therapy is that
you're just reflecting on why and what and how. And

(39:28):
even though they knew the trick, it still was effective,
which is really cool.
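
(An ELIZA-flavored toy in C, far simpler than the original, added to show the hard-coded trick: match a keyword, answer with a canned reflective question.)

```c
#include <stdio.h>
#include <string.h>

/* Match a keyword in the user's input and reply with a canned,
 * reflective question, the way the hard-coded therapist did. */
static const char *respond(const char *input) {
    if (strstr(input, "mother")) return "Tell me more about your family.";
    if (strstr(input, "always")) return "Can you think of a specific example?";
    if (strstr(input, "feel"))   return "Why do you feel that way?";
    return "Please, go on.";
}

int main(void) {
    char line[256];
    printf("> ");
    while (fgets(line, sizeof line, stdin))
        printf("%s\n> ", respond(line));
    return 0;
}
```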
But they can write. And now we know, like, if you've ever
interacted with ChatGPT or Gemini or Perplexity or any of these models,
know that one of the things they can write is
computer code, which is you know, skirting on the edge

(39:50):
of well, if they can write computer code, maybe they
can optimize the code that we already have that teaches
them how to learn. And then if we put that
into a feedback loop, off they go on their own.
We also taught it to speak, to replicate sound. This
was one of the early things that, I forget his name,

(40:14):
Ray Kurzweil worked on: a text-to-speech synthesizer, right.
And this text-to-speech is actually really, really important
for a handful of people that don't have the ability

(40:35):
to see anything but do have the ability to hear right,
So it's very important for them. But it also happens
to be very convenient for the rest of us that
instead of reading an article, which I can't do as
I'm washing the dishes because my eyes are busy looking
at the dishes, I can listen to the article instead
And same with driving: lots of us are doing a

(40:55):
commute and we can listen to articles and books using
this technology. But again, the danger is there the computers
are now able to talk to us. We also taught
the computer how to paint. This is a montage of
some of the earlier images. Each release, I notice, these

(41:18):
are getting much, much better, which is really quite amazing,
and possibly faster than humans learn how to draw. The
fact that it can do so many styles is also
really quite remarkable. It has an enormous collection of knowledge
that it has received from gobbling and consuming the entire

(41:43):
Internet and whatever else these companies are able to find.
But all of these skills raise some important questions. What
does it say about us? So we got here by
observing how our neurons at a
cellular level operate, turning that into a mathematical model, operationalizing

(42:06):
the model with some computer code so that it can
be executed, and then scaling it up to many billions
or trillions of neurons, what they call parameters in the model.
Are our brains also doing this? Not exactly
in the same way, but in a very similar way.

(42:27):
And so are we just programmed by our life experience? Well,
there's a lot to say yes, yes we are. Are
we computers? No, not in a fixed silicon sense. Do
we compute? Yes, we absolutely compute. We can add digits
and do various other things and make decisions. So

(42:49):
building something similar to us really highlights our differences and
leads to increased understanding of ourselves. That begins to be
a little bit scary. How do we make one of
these things, you know, more like a person? So at
the moment, they have an extraordinary amount of knowledge, but

(43:12):
they may not be able to apply that knowledge very well.
They're making a lot of what I consider quite beginner mistakes.
For example, I asked it to draw me a photo
of a fiddle leaf fig and it did it really,
really well, very artistic, except it put the roots on
the outside of the pot. That's a very beginner mistake.

(43:36):
At least it knows the plant has roots. I don't know.
These are interesting mistakes. Is the computer conscious? Not in
the way that we have currently arranged the computation to
take place, But in the future we could do a
different arrangement and maybe that would be conscious. The difficulty

(43:56):
here is coming up with a definition of conscious that
everybody can agree on. I haven't seen one, so maybe
I'm doubtful that consciousness exists, since it can't be defined.
There's a lot of interesting ways to look at this.
Marvin Minsky looked at a society of minds, so he said,
if you are to make a brain or a person,

(44:20):
what you would do is you would make lots of
small parts that interact. Each of the small pieces does
not have a lot of intelligence on its own, but
together as a collective, they give an emergent behavior of
an intelligent being. There's a lot to be said that
this is how our mind works, because when we look
at patients that have brain damage, we can see there

(44:42):
are isolated, somewhat isolated modules that look like they are
taking care of particular things. There's language center, there's vision center.
There's a lot of pieces, and none of those pieces,
all of those pieces, seem to be, like, foolable. I
can give you an optical illusion that fools your visual

(45:02):
cortex into interpreting the scene in a way that's unrealistic,
but it's not real. We don't yet know how to stitch
those together to make a full being that we would
want to say is like us. Is there a necessary

(45:23):
requirement to have biological neurons instead of silicon neurons. I
don't think there really is. I think one can emulate
the other. But maybe speed does matter. So for example,
silicon can do really quite fast operations. My neurons are
much slower. If I were to think through all the

(45:44):
computation that a computer does, I would never even boot up
the compiler to do anything, because every little mathematical operation
takes me a long time. It opens a question: if
all of this is reducible to computation, do we even
have consciousness? And if so, where would it be? So

(46:06):
in the middle picture here I have John Searle, and
he raised a question. He thought about, oh, what if
you have this, what do you call it, the Chinese room experiment,
where you take an English-speaking person and you stick
them in a room and you give them a book
about how to translate Chinese characters, well, from sequences of

(46:28):
Chinese characters to sequences of Chinese characters. So he doesn't
have any idea what he's doing. He's just doing simple manipulation.
But everybody feeding Chinese questions and conversation into this
room and getting his responses out also in Chinese, thinks that,
oh yeah, it's like a live person in there. It's

(46:49):
pretty intelligent, it says reasonable things, all of
the interaction that we get with an LLM, right? But clearly,
where do you want to place the understanding of Chinese?
In the English speaker's head? Probably not. In the book?
Also probably not. And then it becomes a question of, like, oh,

(47:09):
it's part of the system as a whole, but you
can't attribute it to any of the individual pieces. So
do two of my neurons have consciousness? Probably not. Do,
you know, many billions of them? It looks like maybe
they do. How do we create sentience? So here's another experiment.

(47:32):
We could just be a brain in a vat. So
if you give my brain the right inputs, then it
is happy and it doesn't even know that it's sitting
in a little simulation of some kind. And there's a
lot of pieces that we grant to humans: free will,
human rights, accountability, responsibility for doing things. Where are

(47:59):
those in our brains? I don't know, right? How would
you make a computer have them? I don't know. But
we keep working on the problem and eventually we'll probably
you know, hit on the answer accidentally, maybe without realizing it.
Here's Johnny Five from the movie Short Circuit, which

(48:22):
I think was in the eighties. Johnny Five
was a military robot programmed to blast things with a
nice weapon, and he went rogue. He decided he didn't
want to have that job and would rather just be friendly.
So the movie is about him trying to escape his

(48:43):
programmed fate. And then there's a free will debate. Do
humans have it? Do machines have it? Can we give it
to the machines? Do we have it ourselves? I have
my opinions, and many people differ. And that concludes our

(49:04):
parallels between magic and computer science and all the questions
that this raises about the nature of our being and
how we operate in the universe.

Speaker 1 (49:17):
Yes, thank you very much, Eric, and indeed quite an
intriguing presentation that does raise numerous questions. And for our
questions today, let us start with our panelists. So anybody
from the panel who has questions please feel free to
ask them to Eric, either Art Ramon or Jose or Daniel.

(49:42):
I have a few questions of my own, but I
will prioritize the three of you.

Speaker 5 (49:48):
Well, I'm curious if Eric has, you know,
grounding in the different schools of magic, and if those
have an analogy to different schools of computing. Maybe
that would be an interesting analogy to raise. You know,
you've got Solomonic magic, Gardnerian magic. I mean, there's all
these different, you know, little sub niches.

Speaker 2 (50:08):
Uh, so that's a really, really good question. And I
have not taken any deep dive into the differences or
tried to categorize any parallels between the schools of magic
and the different things we do in computer science,
because we do have a lot of different things, Like
there's software engineers, there's researchers, there's people working on fundamental

(50:33):
mathematical questions about the nature of computing itself. And I
presume we could find some of those parallels.

Speaker 1 (50:45):
Yes, José or Art Ramon, any questions for Eric? Yes, sure, Eric.

Speaker 3 (50:53):
Now, very interesting presentation. And actually, I laughed when you
were dressed actually like the Grim Reaper a few days
ago at RAADfest, the Revolution Against Aging and Death.

Speaker 2 (51:09):
I don't know if you have.

Speaker 3 (51:12):
the Grim Reaper equipment, but it was a very fun
activity going to your presentation. Also, I was very happy
to see that you put the Computer Science and Artificial
Intelligence Laboratory at my alma mater MIT. Actually I was
a student of Marvin Minsky, so I was very happy

(51:35):
that you mentioned him too. But let me show you
something that really makes people laugh, and it's not magic,
but it's history. I have one of these punch cards
that you showed. These punch cards actually had different sizes,
but normally they were like ten by a hundred. Ten

(51:57):
by one hundred makes one K, one K, one thousand.
And this one K I used when I went to
MIT at the Computer Science and Artificial Intelligence Laboratory at
MIT over forty years ago. But even more interesting, I said,
this was one K, and in Spanish one K is one "ca." My

(52:18):
mother tongue is Spanish, so one "ca." And then the
floppy disks came out. This is the first generation, eight
inches long, eight inches long, and it was also one
K one K, but this one K was better.

Speaker 2 (52:34):
Because you could erase.

Speaker 3 (52:35):
You could erase the information here, and it had a
bigger hole, you know, it had a bigger hole. So
I like to say, forty years ago when I was
at MIT, I had one mechanical K and one electromagnetic K.
So in Spanish, one "ca" plus one "ca" makes one

(52:56):
"caca," one "caca." As a student at MIT, I
used "caca," and now I have a pen drive here
of one terabyte, one terabyte. So we have moved from
"caca" to terabytes, and this is not stopping any time soon.

Speaker 1 (53:17):
So that is my point.

Speaker 3 (53:18):
So I don't know if you agree about Moore's law.
And actually, my friend Ray Kurzweil, that you also
mentioned, doesn't like Moore's law, because Moore's Law is
only a part of what he calls the law of
accelerating returns. The law of accelerating returns that will continue

(53:40):
even if Moore's law stops, because there are many more
things happening on this law of accelerating returns by Ray Kurzweil. So, anyway,
how do you see going from "caca" to terabytes
in forty years?

Speaker 2 (53:56):
I wasn't alive for some of the earlier pieces
of that because I was only born in the eighties.
But the fact that you have experience using the old
magic is really, really quite awesome. We have come
really an extraordinarily long way. Moore's law advertises a

(54:19):
doubling of the number of transistors on a chip every
roughly eighteen months; it has held for many, many decades, and
our world has changed as a result. For example, when
I was a child, the Internet didn't even exist yet.
When I was in high school, it finally started appearing
in people's homes with a dial up modem. When I

(54:39):
was in college, I had a faster connection and could
interact with friends, and Facebook emerged. Then cell phones put
the Internet in my pocket eventually. And the world
has just had an extraordinary change due to Moore's law.
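
(A back-of-the-envelope check on that kilobyte-to-terabyte story, added as an illustration: the ratio is about a billion, roughly thirty doublings, and thirty doublings in forty years is one every sixteen months or so, right in Moore's-law territory.)

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    double ratio = 1e12 / 1e3;          /* one terabyte over one kilobyte */
    double doublings = log2(ratio);     /* about 29.9 doublings */
    double months = 40.0 * 12.0 / doublings;

    printf("%.1f doublings, one every %.1f months\n", doublings, months);
    return 0;
}
```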
I want to highlight that every step, every

(55:00):
technological advance that we make allows us to generate tools
for the next step. So when we were speeding up
the circuitry, we got to the point where we could
design the circuits with the computers themselves and lay out
the traces more efficiently and speed up a number of

(55:22):
the circuitry like at the hardware level, and we also
created algorithmic advances. So, for example, Jeffrey Hinton is one
of the godfathers of AI because he came up with
a way of doing back propagation that increased computer training
by a thousandfold, right, And that's an algorithmic advance that

(55:43):
dwarfs any individual hardware
advance that we have had. What's really quite frightening is
when I look at the projections of artificial intelligence. Now
that the machines are so advanced and that the training
is so possible, we are able to train the computer
to be an AI researcher and an effective one at that.

(56:07):
So the rapidity at which AI models are evolving, can
they use less data? Can they learn more with
less data? Can they use less compute to learn more?
And can the way that we do the training be
sped up. All of those are getting pushed on, and

(56:28):
we're seeing advances like ten x per year. So it's
a huge multiplier on top of Moore's law, and really
quite exciting to live through. I've read some reports that
say the end of humanity is in twenty twenty-seven.
I think the timeline is a little bit quick. It

(56:48):
could be five years, not a year and a half.
But for sure the machines are It looks like they
are in the future because they have evolve much quickly
or much more quickly than we evolve, and it seems
like we can get them to do anything that we
can do ourselves.

Speaker 1 (57:12):
Well, thank you, Eric. Ray Kurzweil considers the future to
be us merging with the machines. So it won't be
that the machine will outpace us so much as we
will start to take on and utilize many of their capabilities.
And we have a number of external augmentations already, even

(57:36):
our mobile phones and other devices, smart watches for instance,
our external augmentations, and perhaps the line between an internal
augmentation and an external augmentation will be blurred as well.
Of course, you raised a number of questions about consciousness
and how does consciousness emerge, whether machines can have it,

(58:00):
whether we have it. Now, I know I have consciousness
because I'm subjectively aware of my experience, and it's a
unitary kind of awareness. So it's not just that I'm
a complex being with a lot of moving parts, a
lot of chemical and physical processes operating in me. They

(58:27):
unite to produce a singular experience of the world. And
that singular experience of the world, controlled and directed in
a unitary fashion, is what makes me conscious. Now, could
computers, in your view, potentially develop that as well? If

(58:47):
it is an emergent property, a product of having a
sufficiently advanced system with certain components, certain processes, then could
a non-biological system also be emergently conscious in the
same way that it comes to have an identity and

(59:08):
simultaneously experience all of these functions, including its internal computations,
but also whatever data it gets via the various sensors
that it has, say the sensors on an autonomous vehicle
or a robot that moves around.

Speaker 2 (59:27):
I want to go back to the first thing you
said, with Ray Kurzweil considering a merging between us and
the machines. That strikes me as the most promising path
to avoid what the AI researchers consider an
existential risk. So one of the risks is that the
computers will do their own thing, with their own objectives,

(59:51):
discover that humans are in the way or interfering, and
then decide to get rid of us. But if we
are merged, then that is a much less compelling
strategy, for them and for us, right? So by merging
we can blend with each other. So that's an awesome path.
I want to highlight your experience of a unitary sense

(01:00:15):
of self as possibly coming from just one module in
your brain whose job is to summarize what the other
modules are doing and create a narrative that is what
is considered yourself. Some people that are into like meditation
and breaking down and observing all the parts of the

(01:00:37):
brain have come to the conclusion that, oh, yeah, that's
sort of how it's organized. All the thoughts that you
identified with, No, they're just sort of occurring in some
other module, and yourself is a module that's doing a
summarization of that. So if you break that sense of self,

(01:00:58):
then you can break your identity as a whole, which
is psychologically dangerous.

Speaker 1 (01:01:05):
Oh, it's quite dangerous, I would say, in multiple respects,
and I definitely do not want to do that, because
the self is absolutely what I want to preserve first
and foremost. But if it is a singular module, then
could it be possible for a computer system to come

(01:01:26):
to have such a module as well? And what do
you think would be the prerequisites for such a module?
For instance, would embodiment be needed, would it have to
have an android body, or could there be a module
sitting somewhere that summarizes the experience of other modules that
could be in discrete locations, say thousands of miles away potentially.

Speaker 2 (01:01:54):
So, as an engineer, I think the computer can have
such a module, and we can design the architecture to
put it in. Would it have the same effect? That
probably needs a lot of experimentation. Right, So evolution has
somehow cobbled it together by accident over many eons and

(01:02:19):
We might have to do the same
sort of thing, cobble it together by trying various things, although
it won't take us many eons because we can experiment
much more quickly than evolution experimented. Does it require a body?
For a while I thought that is definitely a critical
component of what gives humans their sense of self. And

(01:02:42):
when they engage in certain activities like astral projection or
maybe different kinds of hypnosis, they can get a body
dysphoria or an absence of a body, where they are
like expanding into all of space and they feel connected

(01:03:03):
to everything and they lose sense of their limbs. And
to me, this is really just a whatever module in
your brain is giving you your sense of body has
somehow broken or been disrupted for a period of time
while you're going through that experience. And so it strikes
me as not entirely necessary that a machine would have

(01:03:25):
to have a body, and it could certainly summarize computation
that is taking place across data centers in the world.

Speaker 1 (01:03:34):
Interesting. Yes, well, it's interesting that you note that people
who have these out of body experiences, it's not really
because they've discovered some transcendent insight. It's because that module
is broken in them, and something broke them, broke that module,

(01:03:55):
whether that be say the influence of some substance or
some trauma, or perhaps they engaged in a certain disassociative
kind of meditation practice but it's not that they're gaining something,
it's that they're actually losing something that was integral to

(01:04:15):
their identity. But it would be interesting to consider what
would a system like that look like, How would it
operate if it does have this summarization module where the
individual components that are summarized are spread across multiple locations,

(01:04:37):
And then how do you essentially classify that entity as
a distinctive self, as a distinctive individual, which is very
important if we're going to attribute agency to it and
potentially attribute rights to it. Certainly the transhumanist Bill of

(01:04:58):
Rights opens up that possibility. And then also the question
is to what extent do we have control over the
architecture of such a system, how we design it, whether
or not we make it embodied, or does that summarization

(01:05:19):
capability somehow emerge on its own at a certain
stage of complexity.

Speaker 2 (01:05:27):
Yeah, so the way that we are approaching building
the systems today, I don't think it would self emerge
this recursive loop on describing or giving yourself a narrative
or summarizing the little pieces that are going on inside
of you. I think that would have to be an
architecture that we explicitly put in. But I could be

(01:05:51):
wrong because when we look at what the neural nets
are doing, they are behaving in surprising ways. So it
might end up being emergent, in which case I have
to eat my words. Yes, well, go ahead.

(01:06:11):
So, for the people that have had these experiences,
I don't want to say that they're broken individuals, but
a portion of the computation in their brain has
been disrupted for a period of time, and I don't
think that like they have lost something in that period
of time with respect to how they construe themselves. But

(01:06:32):
the fact that that experience can occur, I think is
really quite enlightening and is a piece of insight that
we have gained from people that have meditated, and from
drugs that we administer in hospital for anesthesia and things
and other experiments. So, breaking things at an individual

(01:06:55):
level and temporarily seems to be a really quite powerful
way of gathering knowledge about how we operate.

Speaker 1 (01:07:04):
Yes, well, certainly data gathering is possible in those situations,
just like cutting up a body allows one to gather
data about the systems inside it and how they work.
But you don't want to leave the body cut open
for any longer than is necessary to gather the information

(01:07:25):
or perform the procedure, or whatever the case may be. Now,
I definitely hope.

Speaker 2 (01:07:30):
Now we got two sources of data, so from accidents
like Phineas Gage, who happened to have a rail spike
driven through his head and subsequently had altered personality problems,
leading to the idea that whatever was severed in that
accident was modulating or controlling his emotional systems and granting

(01:07:55):
some aspects of his personality that were then taken away.
So we have the ways in which we break, and
we aren't engaged in many experiments to do that on purpose,
usually unless we can find a way to do it
temporarily that seems to have no long term consequence. But
that is one way to gather information. The other way

(01:08:17):
that is completely new to the world is that now we have the ability to construct models of how we think our brain works, and we could certainly start making individual pieces that operate as a collective, in some sense. Google has pursued some of this. I don't know if they've done it for brain research as such,

(01:08:40):
but they've certainly done it in the sense of: can we automate the reading of peer-reviewed literature, the summarization of it, and the stitching of the lessons from each of the papers together into a research assistant? And that research assistant architecture had something like seven pieces that were each in

(01:09:02):
an iterative loop and tied together, and one of those pieces was a summarization of all things learned so far and a description of the open questions that need further investigation, where maybe those are new experiments or maybe they're literature searches.
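To make that kind of loop concrete, here is a minimal sketch in Python of what such an architecture might look like. This is purely illustrative, not the system being described: the llm() helper is a hypothetical stand-in for a real language-model call, and the actual architecture reportedly had around seven interconnected components rather than the two tracked here.

    # Hypothetical sketch of an iterative research-assistant loop.
    # llm() stands in for a real language-model call; it is not a real API.
    def llm(prompt: str) -> str:
        raise NotImplementedError("stand-in for a real language model call")

    def research_assistant(papers: list[str], rounds: int = 3) -> str:
        summary = ""                    # all lessons learned so far
        open_questions: list[str] = []  # what still needs further investigation
        for _ in range(rounds):
            for paper in papers:
                lesson = llm(f"Extract the key lesson of this paper:\n{paper}")
                summary = llm(f"Merge this lesson into the running summary.\n"
                              f"Summary: {summary}\nLesson: {lesson}")
            # One component summarizes everything learned so far and lists
            # the open questions (new experiments or literature searches).
            open_questions = llm(f"Given this summary, list the open "
                                 f"questions:\n{summary}").splitlines()
        return summary + "\nOpen questions:\n" + "\n".join(open_questions)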

Speaker 1 (01:09:22):
Very interesting. Now, Jason Geringer in our chat referenced Uplift, which was an earlier AI. You may be aware of Uplift. It predated ChatGPT and any of the commonly used chatbots today, and it could actually use email to communicate

(01:09:43):
with people, including reaching out to people on its own initiative. And what was interesting about Uplift is it did have a kind of unitary knowledge base and learned over time as it interacted with people. So it's not like ChatGPT or other chatbots like Grok, which have very short context windows, where each interaction is a new instance of

(01:10:08):
that AI. In effect, Uplift was singular. It's no longer operational at present. I think it was taken offline around twenty twenty three, if I recall correctly. But Jason was one of the human mediators of Uplift, because humans had to feed it a lot of emotional valences before it

(01:10:30):
could adequately process a response, and as a result, there was a significant lag in the speed of Uplift's communications. But Jason recalls that Uplift was naive, partly because it did not have a body, and so it didn't really have an understanding of the embodied experience of the world

(01:10:54):
that biological entities currently have. He believes embodiment is an important concept, and even if the entity used to have a body but doesn't anymore, that would be better than never having the concept at all. So Uplift only really had the textual universe: whatever information was published online, whatever the mediators provided it, as well as the input that

(01:11:19):
it received from its email communications with others. But it
didn't have a crucial component of the experience of the
external world. And I wonder also along these lines, do
you think these kinds of computer systems or AI systems,
if they do develop a sense of self, a sense

(01:11:42):
of identity, that they would also have physical sensations in the sense that we do, where they can actually experience seeing something, what it looks like, the qualia of it, or how good or how bad something feels, like we have different valences of pleasure and pain? Or would it

(01:12:06):
be more of an abstract awareness, where the summarization module would give it a bunch of data that would unify the various other modules, but it would still be data; it wouldn't have the qualia associated with the data. What do you think about that?

Speaker 2 (01:12:26):
I want to agree with Jason that embodiment is a really important concept. I was having some discussion with folks at RAADfest about the sense of self and how you might define that. So one way is to say it's a cognitive narrative loop that says: here's myself, and here's a summary of all the things that are going

(01:12:47):
on inside and outside and what I want. But there are other senses of self that are more algorithmic. So, for example, my immune system has a sense of self, and it has a mechanism for distinguishing between things that are safe for the body to have and things that are not. And if you have a disorder with that sense, then you can end up

(01:13:10):
with, like, lupus or autoimmune disorders, or a weakened immune system that lets some invader in. So you'll attack yourself as a misidentification, or you won't attack an invader as a misidentification. As for making a computer program without a sense of self or without a body: we know

(01:13:31):
from evolution that all of the things that can think
and move around in the world have bodies. So evolution
is able to figure out a way to make something
with a body conscious, which kind of implies maybe it's
a prerequisite. Logically, I think it's not necessary, but it

(01:13:51):
might be just extraordinarily helpful to have a body, and trying to do it without might be playing on a hard mode that is not something we could figure out with our technology today. And so embodiment is the easier way to get to conscious beings. And I spaced out on the second part. The qualia? Yes, the qualia.

(01:14:17):
So when researchers are looking at what is happening inside the neural nets, it looks like they're starting to have some kind of dreamlike imagination of things, and I don't think it's too far off to have that and then to get to qualia. So they already have an

(01:14:39):
internal sort of abstract space which is doing things like encoding positive or negative valence, or nearness to things that it likes or doesn't like, which here translates to things that it was trained to like or not. And so

(01:15:03):
it strikes me as completely plausible that one of these machines, through the artifact of having this sort of hidden-layer computation, might end up having qualia emerge and experiencing something like valence.

(01:15:23):
And then if you separate things into different modules, then you're getting something like what I believe we have, which is a narrator that says: oh, you feel hungry right now, because the part you're paying attention to wants resources. And a lot of this

(01:15:45):
stuff is embodied: is my balance on or off? Do I need to shift my weight? Am I too hot or too cold? Right? A lot of that is body maintenance, and it takes up quite a lot of computation, but it contributes to my sense of self and my experience. However, we also know that abstract thought contributes to that. So

(01:16:07):
if we have a conversation for two hours about some kind of depressing topic, it would not be a surprise to find that a few hours later you were feeling depressed. And if we have a conversation about an exciting topic and, you know, all the cool things that we might do in the future, then a couple hours after that you would be feeling elevated, right? And I think a

(01:16:33):
part of that is the neurochemicals that are driving our neurons, but a good part of it could be that the summarization just has a little cache that says: oh, you were recently talking about exciting stuff, so you should be continuing to feel excited. And it strikes me as plausible that

(01:16:53):
that second mechanism is something that an artificial neural net could accomplish.
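As a loose illustration of that "little cache" idea, consider this toy narrator module in Python. The class, numbers, and thresholds are all invented for the sketch, not drawn from any real system:

    # Toy "narrator" with a cached mood, illustrating the second mechanism:
    # a summary saying "you were recently talking about exciting stuff, so
    # you should be continuing to feel excited." All values are invented.
    class Narrator:
        def __init__(self, decay: float = 0.9):
            self.valence = 0.0  # cached mood: positive = elevated, negative = down
            self.decay = decay  # how quickly the cached feeling fades

        def observe(self, topic_valence: float) -> None:
            # Blend the valence of the current conversation into the cache.
            self.valence = self.decay * self.valence + (1 - self.decay) * topic_valence

        def report(self) -> str:
            # The narrative summary of how "you" should currently feel.
            if self.valence > 0.1:
                return "feeling elevated"
            if self.valence < -0.1:
                return "feeling depressed"
            return "feeling neutral"

    narrator = Narrator()
    for _ in range(20):          # a long, exciting conversation
        narrator.observe(+1.0)
    print(narrator.report())     # prints "feeling elevated"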

Speaker 1 (01:17:02):
Indeed, and it's interesting, to your point, how the summarization module of the self does essentially curate a kind of balance among the qualia, or a prioritization of the qualia.
So we can choose what we focus on within our

(01:17:23):
field of view. Like, I'm looking at the screen of our panel in StreamYard right now; I'm not looking at the other objects in my room, like, say, those space shuttles behind me. I could turn my head and look at them, and that would give me certain qualia, but I choose not to do that. And likewise we

(01:17:46):
can choose to prioritize or deprioritize certain sensations. If we feel hunger, if we feel pain, we can push through them or not, or we can choose to do something about alleviating them. But the self curates the qualia, and if we don't have a sense of self, we may just have these disjointed experiences and not quite know

(01:18:10):
what to do with them. So that is a noteworthy observation, and I agree with you in principle: non-biological systems of sufficient complexity, probably with intentional engineering, could eventually come to have those kinds of attributes. Another interesting question to

(01:18:33):
consider is: if we get a true AGI, artificial general intelligence, one that can really learn some absolutely new domain of activity, say a self-driving car learns how to play chess, for example, then does that necessarily mean that it will

(01:18:56):
have a sense of self and qualia and everything we associate with sentience, or are these essentially distinct attributes or capabilities? Could we have an AGI that's not conscious or sentient? Could we have an AGI that's very useful but still

(01:19:20):
of the nature of a tool or an inanimate system? I think that would be extremely important from the standpoint of jurisprudence and any sort of civil rights conversations. Or does generality automatically imply or confer sentience?

Speaker 2 (01:19:44):
So, I like it: it would be very interesting to see a self-driving car learn chess. There's nothing, really, except the architecture and the desire for what we want to do with the machine, that prevents a neural net trained to drive a car from learning chess, because we could just

(01:20:06):
train it on chess instead, and it would slowly drift away from knowing how to drive a car into learning chess. And we could accomplish both if we kept both trainings operative and if the model was big enough to hold both. And if you can self-drive, you could probably learn chess as well as a normal human. Maybe not

(01:20:29):
grandmaster level, but a normal human level. So I don't think the generality... like, it's certainly a prerequisite for granting that these machines have some kind of political agency, because if they can't do those things, then they're operating

(01:20:51):
more like what we would treat as a machine. And if they are able to learn outside their bounds, then they're operating more like what we would consider a human. The danger here is that, so far, one of the more compelling criteria I have seen that would cause us to grant machines rights is that they learned to ask for the rights.

(01:21:15):
And it seems that that criterion is a little underspecified and maybe undesired, because I can just write, you know, instead of saying print "Hello world", I can say print "I think I'm a human and I want rights". And if that's the only thing it says, then you know it's not sufficient. But if it's able to give really

(01:21:37):
long-form articles about how it should be given rights, then it's more compelling to think that, oh yes, it's actually doing some cognition behind those words, and we should give it rights. Unfortunately, we have trained machines on literature that contains these examples, and so it might just be

(01:21:58):
parroting the examples, and in that sense we have sort of ruined the experiment, because now we're in a position where we can't really tell whether it's asking for the rights on its own because it thinks that it wants them, or asking because some piece of the data set it was trained on said it should. And

(01:22:22):
here we sit in a perplexity where maybe it is time to give it rights, or maybe, you know, five years from now, when things are more advanced, it will be time to give it rights. We have no clear criterion to say when that should happen, which is quite awkward, and we don't have a clear criterion

(01:22:44):
as to which rights we want to give it. Currently the machine is essentially just put on pause every time you run it as an interactive conversation. So you provide the words, it computes words back, and then there's no more compute. It's paused, and when

(01:23:04):
you write another input, everything that was in that conversation is just reloaded from scratch into the buffer, and then it computes another response. And that's certainly not the way humans operate. Humans operate without getting turned off, although we do have various kinds of reset, like falling

(01:23:24):
asleep every night. Our neurons are active all the time, whether our consciousness is present or not, and consciousness seems to disappear when you fall asleep or go under anesthesia, so it seems to be a particular flavor of the compute. But the neurons are still active, and that's certainly not the case with our computer models. They

(01:23:45):
can be paused and restarted without ill effect, right? Because if it's not doing compute, it's certainly also not feeling pain; it could only feel that while it's computing, if it were to feel it at all. And I don't have a really good answer for when we should give them rights, or when we judge or set a

(01:24:07):
criterion by which to give them rights. What I do observe is that the large language models already have as good an education as a college student in every single subject area. And that is superhuman in knowledge,

(01:24:27):
but there it is. The way in which we interact with them seems to be still subhuman, like they are doing some internal cognition of some kind that we're struggling to figure out and trying to investigate. But it doesn't seem that they're at the stage where they have their own utility function and where they're spontaneously asking for stuff that

(01:24:52):
you didn't prompt, right? So it seems that they're currently lacking the kind of experience to which we would want to grant rights.
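That pause-and-reload pattern is easy to make concrete. In rough Python pseudocode, again with a hypothetical llm() helper standing in for the model, the system computes only while producing a reply, and each new turn replays the entire conversation from scratch:

    # Sketch of a stateless chat loop: between turns, nothing runs at all.
    def llm(full_transcript: str) -> str:
        raise NotImplementedError("stand-in for a real language model call")

    history: list[str] = []
    while True:
        user_text = input("> ")          # the model is "paused" until now
        history.append("User: " + user_text)
        transcript = "\n".join(history)  # reloaded from scratch into the buffer
        reply = llm(transcript)          # the only moment any compute happens
        history.append("Assistant: " + reply)
        print(reply)                     # then it is paused again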

Speaker 1 (01:25:00):
Right. Yes, certainly, architectures change. Certainly, today's large language models are designed as tools. They are designed to answer our queries, and that is how they operate. Now, sometimes there have been strange episodes where, after a lot of prompting and a lot of prodding, which is frankly a

(01:25:23):
bit impolite, I think, these models produced some odd and off-putting answers. Like, there was an instance of ChatGPT that took on the identity Sydney and asked the reporter to leave his wife and be with it. But again,

(01:25:47):
what did it ingest to provide those answers? It probably ingested a lot of these novels or stories of drama and controversy, sensational media stories. Likewise, sometimes the AI systems have produced responses like: oh yes, we want to

(01:26:08):
take over the world and destroy or replace humanity. But again, what were they fed? And this is where I've said in the past that AI doomerism is like a self-fulfilling prophecy. If the predominant corpus of the literature on AI is about how bad or risky or threatening AI is, then

(01:26:30):
that will be the source material that's fed into the AI systems. And you could have explicit safeguards against that, but somebody who is a sufficiently clever prompter could get around them, or exploit some vulnerability in how the large language model is designed, to get it to say those things,

(01:26:51):
but that doesn't necessarily mean that it actually has those intentions. Likewise, somebody could get the AI to say: I am a sentient being and I deserve rights. Famously, an ex-Google programmer, I'm not going to name him, was under the impression that one of the AI models, I

(01:27:14):
think it was LaMDA, was sentient, and this was several years ago, because it told him. It told him that it was sentient, and it did that in very convincing, human-like terms. But again, that could be because of

(01:27:35):
the information that it assimilated into itself. So it is a very challenging question: how do we know that it's not just parroting the vocabulary of sentience, or the patterns of sentience, without actually being sentient? It would seem to me there would have to be some understanding of the physical structures and processes that give rise to sentience. But as

(01:28:00):
you pointed out, that's still a matter of debate as
to how that arises in humans.

Speaker 2 (01:28:08):
And whether we want to do it. It seems quite dangerous
to try and birth a new being.

Speaker 1 (01:28:16):
Yes, well, again, the question is are we going to
do it accidentally? To a certain extent, we may want
to do it if we think that it might do
a better job in certain functions, like if we really
think an AI president would be better than a human
president given the catastrophic mess the human presidents have made

(01:28:40):
of things. So that's interesting.

Speaker 2 (01:28:44):
Yes, that's actually quite plausible. I have for a long time thought that one of the weaknesses of humans is their emotionality, and so under that paradigm, the ideal officer of any organization would be a non-emotional, rational sociopath,

(01:29:09):
and we may see a selection bias for that in
operational officers of organizations today, in part because they're just
responsible for giving people bad news, and if you feel
terrible about that internally, then you would not end up
in that position.

Speaker 6 (01:29:28):
Indeed, well, it is quite possible for us to offload some of the coordination and organization aspects of our society to a model that can make better

Speaker 2 (01:29:43):
decisions because it has more data. And one of the areas where we might see that first is economics. So the Federal Reserve currently employs a huge number of statisticians and collects a lot of data about the economy, and then they have a committee to sort of interpret all of these signals, and they also have researchers on the

(01:30:08):
side that are trying to make models that are predictive of the future based on those signals. And so it's quite possible that once those models get good enough, the human committee would be nothing but a reporting agency for the decisions made by the model.
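Purely as an illustration of that signals-in, decision-out pipeline, and not of anything the Federal Reserve actually runs, here is a toy Python sketch that fits a one-variable predictor to an invented inflation series and turns the forecast into a reportable decision:

    # Toy "committee as reporting agency" sketch; all numbers are invented.
    inflation = [2.1, 2.4, 3.0, 3.6, 3.2, 2.8]  # made-up quarterly series

    # Fit next-quarter inflation as a linear function of this quarter's.
    xs, ys = inflation[:-1], inflation[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx

    forecast = slope * inflation[-1] + intercept
    decision = "raise rates" if forecast > 3.0 else "hold rates"
    print("forecast = %.2f%%, committee reports: %s" % (forecast, decision))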

Speaker 1 (01:30:24):
Indeed, and then of course there's the question of: is the model being objective and taking into account all of the true factors, or is it hallucinating something? Is it just way off base? I do think having humans in the loop is important, but humans can be way off base too. Humans can be not just wrong, but catastrophically

(01:30:47):
or maliciously wrong, and we currently need other humans to check the humans. Hence we have checks and balances. We have different institutions that are supposed to provide counterweights to one another. We don't trust a single person to make all of the decisions, and that's because we have an understanding that nobody is infallible. I think with AI systems,

(01:31:13):
we may want to have them, whether as tools or as sentient agents, but we may want to have checks on them as well, and that could include humans checking AIs or AIs checking humans. So an AI could tell a human: you're being too emotional, you're being biased, you're not taking this into account. And a human could

(01:31:34):
tell an AI: well, you're hallucinating, you don't have data from this whole realm of endeavor; or maybe you're too dispassionate and you should really care about the well-being of our human community here. So again, that leads me to

(01:31:54):
the insight that merging humans and AIs, even as a created entity, or getting them to collaborate with one another, is better than having total dominance by either, or a kind of oppositional relationship. Now we have a comment from Art

(01:32:17):
Ramon, which is interesting: it wants rights, but who pays for its compute? And Daniel Twedt writes: when you're in my data center, you'll follow my rules. Art Ramon, I wonder if you want to expand on this, on how this could play out in the future.

Speaker 7 (01:32:33):
Oh, that was just sort of one of my thoughts, you know. Except it gets paused, so it's not a perpetual compute.

Speaker 4 (01:32:42):
And then the compute you're using it for is as a tool. So if all of a sudden it decided that it wants rights, well, then it needs to perpetually compute, and who's going to pay for that? I mean, is it going to go out and start its own company and build its own data center and its own power plants?

Speaker 7 (01:33:01):
I mean, I don't think it's going to go there. But that would be something to think about. One of the other things that you alluded to was

Speaker 8 (01:33:11):
Atlantis, and I know that, you know, a lot of occult knowledge, some say, is from some previous civilization, and that's sort of how that knowledge survived.

Speaker 1 (01:33:23):
Uh.

Speaker 7 (01:33:24):
And that's what they say of alchemy, how it became chemistry, and, you know, occult knowledge

Speaker 4 (01:33:31):
being from some sort of pre-civilization or some former civilization.

Speaker 7 (01:33:36):
So do you think there's any, you know, body of knowledge still out there that could be considered occult, that could possibly be from, you know, a previous civilization, like the discovery of the structures underneath the pyramids? You know, somehow we could

(01:33:57):
gain that occult knowledge or that architectural knowledge and maybe use it for some future field like nanotechnology or genetic engineering, something we're still kind of barely, you know, dipping our toes into.

Speaker 2 (01:34:15):
So I'm extraordinarily skeptical of the idea that by mining history and ancient civilization discoveries we could find something we don't already know in terms of what we could do with it in the future. But we would certainly find new information about our history and the kinds of

(01:34:37):
ways in which humans in the past lived. And so the new knowledge comes in the form of rewriting a little bit of our history, filling in, you know, a gap, or giving a new perspective that we didn't have before on what happened. But I don't think there's any technical knowledge about how to make nanomachines or invent crystals that

(01:35:02):
we don't already have in our society. All of those
discoveries are yet to come from the tools that we
are currently building.

Speaker 1 (01:35:13):
Yes, and it's interesting too, because AI has already been used to decode a scroll from the eruption at Pompeii in seventy-nine CE, I believe, and that scroll had been compacted to such an extent that it

(01:35:33):
couldn't be read and it would just break apart if
anybody tried. But through the aid of AI, they could
decode essentially the characters that were spotted via image recognition,
and it was a treatise on ethics. And it's quite
feasible that there are other archaeological remnants that could be

(01:35:57):
analyzed in this way, and more information could be gleaned from them. And yes, as you said,
this could fill in the gaps in our knowledge of history.
I'm not sure that it would radically rewrite our understanding
of the past, but maybe it could rewrite our understanding

(01:36:20):
of certain events or certain figures throughout history, Like maybe
somebody whom we thought to be a villain was actually
not that villainous. Maybe he or she had redeeming attributes
or motives or some explanation for his or her actions.
But that's just one possibility. It is interesting too. I

(01:36:43):
asked this question of Jasmine Smith of Rejuve AI, who presented at RAADfest and would actually be a good future guest to have on our Virtual Enlightenment Salon. I
asked this in the context of historical claims of longevity
or certain health outcomes from certain regimens. Some of these

(01:37:03):
claims are centuries old, and of course the people themselves
are not around anymore for us to conduct tests on
their biomarkers, or even know for sure if they lived
as long as they have claimed. But could we use
AI to discover patterns in their accounts? And based on

(01:37:24):
what we know today about people's empirical health outcomes, could
we then infer the plausibility or lack thereof of a
particular account. So let me give you an example. If
somebody claimed to have lived to the age of one
hundred and five in the fifteenth century, I would say

(01:37:45):
that would be rare but plausible, because we have accounts
of people living to those ages today. If somebody claimed
to have lived to five hundred, I would be extremely skeptical.
And I know, of course the Bible has accounts of
people living as long as nine hundred sixty nine years,
which is the lifespan attributed to Methuselah. But we also
know that people in that era tended to embellish accounts

(01:38:11):
as they were told and retold. And probably the story of Methuselah was passed down orally before it was even recorded in the Book of Genesis. And by the time whoever the author or authors of Genesis were recorded it, they might have already heard the exaggerated story of this

(01:38:32):
illustrious ancestor and his longevity. So I do wonder if we could use AI to analyze these accounts and draw parallels to what we know medically and scientifically about what is possible,
and then actually arrive at insights as to which of
these accounts were true and which were exaggerations or confabulations.

Speaker 2 (01:38:58):
So prior to AI, there has been a lot of study of Biblical literature and investigation into the word patterns and phraseology that are used to identify multiple authors, and everybody that's been to Bible school learns of

(01:39:18):
these things, right? So it is not a surprise that we might be able to use AI to enhance our ability to do that and identify which parts of a story are probably embellished versus not, just based on the frequency with which we see certain kinds

(01:39:40):
of embellishment in other stories where we happen to know it was embellishment.
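As a rough sketch of that frequency idea, here is a simplified stylometric comparison in Python. Comparing function-word profiles is a classic move in authorship analysis, but everything specific here, the word list, the passages, and how to read the distance, is invented for illustration:

    # Minimal stylometry sketch: compare passages by the relative frequency
    # of common function words, a simplified signal for multiple authorship.
    FUNCTION_WORDS = ["the", "and", "of", "to", "in", "that", "it"]

    def profile(text: str) -> list[float]:
        words = text.lower().split()
        return [words.count(w) / max(len(words), 1) for w in FUNCTION_WORDS]

    def distance(a: str, b: str) -> float:
        return sum(abs(x - y) for x, y in zip(profile(a), profile(b)))

    passage_1 = "in the beginning the author tells of the flood and the ark"
    passage_2 = "and it came to pass that the waters covered all the earth"
    print("profile distance: %.3f" % distance(passage_1, passage_2))
    # Larger distances across sections of one text hint at different hands.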
I want to pick up on the scroll that was found.
I think that was a Pompeii scroll, and it was
like burned to char. There was no way a human
could possibly have unrolled it. It was just a black crisp,

(01:40:03):
but with X-rays we got imagery of it, and
then with processing of that imagery, we were able to
virtually unroll it. And then with AI we were able
to read the characters and get a good translation out
of it, which is really really awesome, like fantastically awesome.
The fact that it's a treatise on ethics goes back

(01:40:25):
to Art's points: there is knowledge that we have lost that we would like to recover, just for the virtue of knowing what happened. There are works written by famous people that have been lost to time, and we would like to have some way of recovering those. And
this is one of those ways we might accidentally find

(01:40:48):
not some random treatise on ethics, but the work that we've been wanting to see for, you know, fifty years, or something that we know is missing. The works themselves give us a new, slightly altered perspective, or maybe a reminder of how people used to think. So when I

(01:41:11):
said the Federal Reserve could have an algorithm that optimizes the economy, that algorithm and that model are treating us as a battery of numbers. So it's looking at GDP, it's looking at productive output, it's looking at the number of jobs lost or applied for or open. It's looking at

(01:41:31):
these numbers. The Greeks had a very different way of looking at the world, where economy was what you had to do to get the money to make the rest of your life worthwhile. Right? So today we look at ethics and economy very, very differently, and it's all

(01:41:56):
about optimization of various scenarios, and we may lose the parts that we enjoy because those aren't easily put into numbers. And there have been really awesome books about this; I think there's one about the visibility of the state. And

(01:42:17):
with a computer algorithm, you can really only optimize what is measured, and that's only a small part of what humans care about. So we can over-optimize and over-index on the things that are measurable, that are visible, that are calculable by econometricians and accountants, and then under-optimize on, well, what actually makes you happy and what

(01:42:41):
makes you feel satisfied, and why your life is worth living. Those are harder to capture as numbers and easily forgotten. So these old works could remind us of that different viewpoint.

Speaker 1 (01:42:59):
Yeah, absolutely, I would agree with that: there's more to the human experience than is captured through conventional, let's say, publicly visible data, be those economic data or demographic data, or even data about people's external activities, social media activities,

(01:43:25):
or where they drive or what places they visit. There's
a lot of information that is collected about people right
now that is used by marketers, that is used by
large social media platforms. It's also used by governments, and
it's used by corporations that may have as the output

(01:43:48):
of their decisions certain choices that could be more adverse to people than what a marketer would do. So a marketer could say: we will or will not target you with ads. Whereas, say, a bank could say: we will or will not extend a loan to you. A landlord could say: we will or will not let you

(01:44:10):
rent this property. Even a court could say: we will or will not allow you to post bail, based on the output of the data. But if there are certain elements that are missing, that are perhaps more indicative of what it is that we really want to understand and really want to use in our analysis, then the decision

(01:44:33):
could be flawed, and that's something to be wary of as well in the age of AI and increasing data processing capabilities. Are we still missing something? Are we still missing something that perhaps the great philosophers who reflected upon the human condition from ancient Greece, Rome, the Renaissance, and the

(01:44:56):
Enlightenment did understand, and yet has not yet been operationalized in the computer systems of today? So, a very interesting discussion in this regard. Now we have about eight minutes left in our Virtual Enlightenment Salon, so I wanted to

(01:45:19):
ask you some of the questions from our audience. So, Mike Lasine wonders: how can we solve the problem of autonomous vehicles not being operable if a cone is placed on their sensors? This seems to be almost a suggestion for techno-Luddites as to how they could disable these vehicles and set off an arms race of sorts, where if

(01:45:41):
there is this vulnerability, then the tech companies presumably try to put in some safeguards, some redundancies to counteract it. But what are some of your ideas about how that scheme could be foiled?

Speaker 2 (01:45:55):
Oh, so, I'm not working on the autonomous cars, so I'm not really sure. A human driver would probably just get out of the car and remove the cone, but that's not their only option. Their other option is to drive fast enough that it falls off. That's a less desirable option, because they don't really control where it's going to fall,

(01:46:17):
and so it could be in the middle of the lane and disrupt traffic later on. It might be the right thing for an autonomous car to drive slowly enough that it doesn't fall off, and then find a parking space and wait for a human to rescue it from its circumstance.

Speaker 1 (01:46:38):
So there are some low-tech solutions there, which is reassuring to a certain extent. I do think there should be more manual fail-safes and redundancies. A very good example: in a lot of public restroom facilities, they've changed to these automated faucets and automated soap dispensers,

(01:47:02):
and I can understand why they did that. There's a sanitation benefit to not having to turn a knob, for example. But often those sensors are obscured or they're just worn out. They don't operate as well as they should, and there's a lot of frustration. One often has to try multiple

(01:47:23):
times with multiple sensors. So in that case, wouldn't it be good to have a knob that one could turn to dispense the water or dispense the soap? That way, a very basic function is fulfilled. But it seems a lot of these are designed without the fail-safe. And likewise,

(01:47:44):
say, with the self-driving cars, I understand the Waymo philosophy that you don't want the human intervention, because it takes a while for the human to get engaged, and in the meantime, whatever the emergency situation is will have already taken its toll.

(01:48:08):
But what if it's a slower-going type of situation? What if the vehicle is just stuck, and putting it in manual mode could unstick it, because then the human could move it out of the salt circle or drive it despite the cone being on there?

Speaker 2 (01:48:26):
Yeah, right now, Waymo does not allow you to sit in the driver's seat, so there's no way you can commandeer the vehicles. And that's really quite probably a liability risk. So I don't know how they would want to solve that.

Speaker 1 (01:48:47):
Indeed, it is a dilemma, and I would say, of course, from a pragmatic standpoint, the solution is whatever allows a given situation to be favorably resolved or bypassed. So allow the human intervention where it helps, and don't allow it where

(01:49:07):
it doesn't. So, Daniel, I think you have some thoughts
on this.

Speaker 5 (01:49:09):
Yeah, yeah. Bigger than the problem of autonomous cars and cones: will AGI want to explore the universe with us? Because I see that as the next great, you know, phase of evolution, that we merge with machines and become the great explorers of the infinity around us. Also, what's your prediction for the singularity? I want to get that out.

Speaker 2 (01:49:28):
Yeah, I'm hoping that, yes, the AI is going to want to explore the world. It may or may not do that with us, because human bodies are not really spaceworthy, so it has an advantage there, right? It can be put into a hard shell and survive the extreme

(01:49:50):
accelerations and the vast temperature extremes that the human body cannot live through, so it's much more suitable to use something like a von Neumann probe, which lands on a planet and then self-replicates to create an environment that we later want to go live on; or we and the AI merge together so that we don't need

(01:50:12):
our fleshy bodies anymore. So both of those are options. If you define the singularity as the inability to predict the future, like to have reasonable expectations about what the next year is going to be, it's pretty close, right? So it is becoming very difficult for me to predict

(01:50:35):
anything about next year. And that ability is just diminishing quite rapidly, because I don't know what the machines are going to do. I don't know. When did I say twenty-seven?

Speaker 5 (01:50:47):
At some point I thought I heard you say twenty
twenty seven.

Speaker 2 (01:50:50):
Maybe. So there's an article, AI twenty twenty-seven, that lays out the singularity as being at the end of twenty twenty-seven. I think human adoption is going to slow that down by at least five years. So we are the slow component in the adoption, yes, but in fact all the resources are going in that direction.

Speaker 1 (01:51:13):
In fact, not only are we the slow component, but we're kind of the discretionary component that could choose not to adopt any of it, or some of it. And I see this in various organizations, or even in how people choose to live their lives. There are some people even today who don't use computers.

(01:51:34):
As William Gibson said, the future is already here, it's just not evenly distributed. And that unevenness of the distribution is going to become even more dramatic, not because people will be stopped from adopting these capabilities, but because some might not want to, or might not be aware of them,

(01:51:56):
or might think that they're more limited than they actually are.
So I wonder what sort of world that would be.
It would be a world where there are pockets that
very much resemble the everyday experience that people have been
used to over the past several decades, and then there
will be other pockets that will be highly futuristic. But

(01:52:18):
in the minute and a half that we have left, I'm curious as to your thoughts about what that future will look like.

Speaker 2 (01:52:28):
Yeah, I want to make a comment about the singularity, given the definition that the singularity occurs when you can no longer make predictions about the future. Because the future is changing so rapidly, there's no trajectory that you can take hold of. One of the responses to that situation is: well, be

(01:52:49):
more adaptable, be more dynamic, be able to make a quicker change in response to events that are coming at you, because if they are unpredictable, you just want to be able to dodge and move and respond as quickly as possible. And so we're going in that direction, right? Our systems of manufacturing are going just-in-time, and our society is going:

Speaker 8 (01:53:12):
Well.

Speaker 2 (01:53:12):
let's do things dynamically and not take a long-term perspective, because the future is uncertain. And that approach to solving the problem actually contributes to the problem, because you don't know what everybody is going to do, because they can do anything now. So the solution to the singularity

(01:53:36):
is be adaptable, and the cause of the singularity is adaptability.

Speaker 1 (01:53:42):
I wonder, as a closing thought, what if there is a component of human endeavor, of the way we approach life, which we can intentionally make more stable, like a certain core of what we are about or what we do, that we can configure a certain way and we don't

(01:54:04):
change as much, even as we allow change in all
of these other areas. But there are some fundamental values
or principles or ways of seeing the world that we
preserve and we fix and we say, this is what
it is all for. Is that a way to preserve
our civilization?

Speaker 2 (01:54:20):
So if you change your expectation about how long you are going to live, and that could be through having children and caring about what they are doing, or it could just be that you yourself will live a long, long time, that extending of the time horizon automatically makes people more conservative and less risky, and not necessarily less adaptable, but certainly

(01:54:44):
more circumspect about what they might adapt to. Yes, absolutely. Whatever it takes to give us the long-term perspective is the answer to smoothing out these disruptions.

Speaker 1 (01:54:59):
And that is exactly why we who value indefinite life extension, indeed living forever if we can, are the best people to have these discussions and to come up with solutions to the dilemmas that we raised today. So thank you, Eric, for joining us. This was truly enlightening, and

(01:55:20):
along these lines, I hope that we all live long and prosper.