Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Now here's a highlight from Coast to Coast AM on iHeartRadio.
Speaker 2 (00:05):
As I mentioned, we've been hearing about AI and the
emergence of AI for a lot of years now, but
suddenly it's like a front burner issue. In Washington at
the G seven event, the Bilderbergers are talking about it.
It's being debated in the labs of Silicon Valley, and
it's been coming a long time, but suddenly it's front
burner, and Brian Roemmele has been one of the people
who's responsible for that. Brian, I can't believe this is
(00:26):
the first time you've been on coast, but we're so
glad to have you here.
Speaker 3 (00:29):
Well, George, what an honor, just a pleasure to be
here and love this subject and I think we have
so much to cover. It is definitely top of mind for people right now.
Speaker 2 (00:43):
I've seen tweets of yours. I love your Twitter feed.
It covers so much ground on a whole bunch of
different topics. But it seems like I remember seeing one
that refers to you diving into AI issues in the
late seventies.
Speaker 2 (00:57):
What were you, thirteen years old at the time?
Speaker 3 (00:59):
Yeah, I was a young guy and actually found some early NASA open source software called CLIPS (everything NASA did was theoretically open source), which was really an expert system, and I was able to get it to run on PCs
(01:23):
and just fell in love with the concept of an
early form of AI, and back then getting it to
answer questions, very rudimentary ones, was quite interesting. Nowhere near what
we're capable of doing today, but it allowed me to
sort of ride that wave of different technologies that had
(01:44):
come, and some left, you know, some were just dead ends.
And now we're in what are called large language models,
which is a very interesting way that we got to
this place in artificial intelligence.
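For context on what an expert system like CLIPS does, here is a minimal, hypothetical sketch in Python (not CLIPS itself) of the rule-based, if-then style of question answering described above; the facts and rules are invented purely for illustration.

```python
# A toy, rule-based "expert system" in the spirit of what is described above.
# This is an illustrative sketch in Python, not CLIPS; the facts and rules
# are hypothetical examples, not anything from the broadcast.

facts = {"sky": "overcast", "barometer": "falling"}

# Each rule: (conditions that must all be present in the facts, fact to assert)
rules = [
    ({"sky": "overcast", "barometer": "falling"}, ("forecast", "rain likely")),
    ({"forecast": "rain likely"}, ("advice", "carry an umbrella")),
]

def run_rules(facts, rules):
    """Repeatedly fire any rule whose conditions all match the current facts."""
    changed = True
    while changed:
        changed = False
        for conditions, (key, value) in rules:
            if all(facts.get(k) == v for k, v in conditions.items()) and facts.get(key) != value:
                facts[key] = value  # assert the rule's conclusion as a new fact
                changed = True
    return facts

print(run_rules(facts, rules))
# {'sky': 'overcast', 'barometer': 'falling', 'forecast': 'rain likely', 'advice': 'carry an umbrella'}
```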
Speaker 2 (02:00):
As I've shared with you privately, I'm kind of a
techno knucklehead. I'm not very tech savvy, so I'm going
to ask questions that are probably pretty basic for a
guy with your knowledge of this. But I think our audience is probably where I am, a lot of them anyway, and concerned about the scary headlines that we all see
about AI. And you know, we've all seen the Terminator movies.
(02:22):
We know what Skynet means, and so I wanted to
get a sense from you where things are and where
they're headed. And let's start with sort of the positive
spin on this. Because you do have a positive view of AI and where it's going, can you give me a sense of what changes are likely because of AI and its emergence, what
(02:43):
it will mean for individuals.
Speaker 3 (02:45):
First of all, well, I'll take the furthest view to
start with, and I call this the undiscovered territory or
terra incognita in Latin. The AI that we have today
is generating emergent capabilities that even the scientists that have
(03:11):
been putting this together and building it in their labs
and now releasing it are rather shocked about some of
the capabilities. And we can talk about that. They call
it AI hallucinations. But in a lot of ways, I
call it AI creativity, not human creativity, but it's creativity
in some form. Where it is at this point is
(03:34):
sort of a contradiction of two different polar opposites. Most
of the AI that we're using right now is in the cloud. It's not on your device. It calls out to a big server. Your questions, which are called prompts, are sent
to this large server. It runs on very large computers.
(03:57):
There are specialty processors, otherwise known as graphics processors, that happen to do very well on the mathematics necessary to make these things happen, by building models (that's vector math) and by actually responding to these prompts. And then there
(04:18):
are the local models, which have just recently come out; literally, we're talking about the last two months this has taken place. The local models are privately held on your computer and have a corpus of data which is sort of like a low resolution version of the entire
(04:43):
data set that all of these large language models were trained on. So we're basically saying most of human knowledge was sort of slurped up from the Internet to build these models. How we were able to get it down to a few gigabytes of hard disk space and run locally is still a rather deep debate
(05:08):
on how that was achieved, because we were told that it was going to take terabytes or petabytes of data to be able to have the outputs that we're getting from local models. And my belief is local models empower individuals, because they can develop a relationship with their local AI and ultimately train it so that it can become familiar with
(05:32):
that person's outlook, their goals, their missions, their beliefs, and maybe even offer guidance to a certain level as these become
more like personal assistants and agents. And in my view,
the more it knows about you, the more it has
to be secured, encrypted, and never touching the Internet. And
(05:54):
that's sort of antithetical to the direction most of the Internet has gone. It's all based on this idea of the cloud, so we're kind of going in opposite directions.
But the open source community, the very same community that built most of what the Internet is running on (in fact, almost all the major operating systems are built on a
(06:16):
Linux/Unix sort of basis that was open source), is building this personal AI. And instead of having maybe one hundred people at a company, tens of thousands of people are taking these models and
(06:38):
building them to such a level that almost daily we're
seeing landmarks that are being set and milestones that are
being broken through. So most of the AI that I'm
very positive about is the AI that we will hold
in our hands at some point, and that's rapidly approaching.
It's going to break all models of advertising. It's going
(07:00):
to break all models of how we would, you know, exist with a device, because once you're training an AI model, this would be the most personal thing, something that you would never want to be separated from. You know, there's a
lot of good with that. Of course, even that has
some bad. But, you know, this type of thing, I call it intelligence amplification. I took AI and reversed it because, ultimately,
(07:25):
that's what I think we're seeing in these models: human intelligence amplified and reflected back to us in this grand mirror. So those are the two different types of AI that I see out there, and we can go down each path if you'd like.
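To make the cloud-versus-local distinction above concrete, here is a minimal sketch of querying a language model entirely on one's own machine, assuming the open source llama-cpp-python package and an already-downloaded quantized model file of a few gigabytes (the file path is a placeholder); no prompt leaves the computer.

```python
# Minimal sketch of a fully local language model query, assuming the
# llama-cpp-python package is installed and a quantized GGUF model file
# (a few gigabytes, as described above) has been downloaded beforehand.
# The model path below is a placeholder, not a specific recommendation.

from llama_cpp import Llama

llm = Llama(
    model_path="./models/local-model.gguf",  # hypothetical local file
    n_ctx=2048,                              # context window in tokens
)

# The "prompt" stays on this machine; nothing is sent to a cloud server.
result = llm(
    "Q: In one sentence, what is a large language model?\nA:",
    max_tokens=64,
    stop=["Q:"],
)

print(result["choices"][0]["text"].strip())
```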
Speaker 2 (07:41):
Sure. Well, okay, let's say I developed my own individual AI, and it's like my other brain that, you know, does stuff for me. What does it do? How does it help me in my daily life?
Speaker 3 (07:54):
So let's go from the theoretical. In the perfect world, you were born with a
camera and microphone, as bizarre as it sounds, sort of
on your shoulder, and it follows you for your entire life,
and it records every book you've ever read, every movie
you've ever seen, every song you've ever heard, every conversation
(08:16):
you've ever had (again, with permission and maybe a social contract to understand that it's not a recording device to play back, but a recording device to catalog your life, to database your life), and every interaction, everything that you've ever experienced, is in this device. So not only, A,
(08:37):
is it memory; B, it's the ability for you to
interact with the context of your interactions. So it will
start making connections between different books that you've read that
you maybe would have made on your own, but maybe you wouldn't have, and you would have a conversation with it.
(08:57):
I'm a proponent of what I call voice first, or voice AI, ultimately, because I believe that's the ultimate interface: to have a conversation like we are having. The human brain is designed to have conversations. In fact, for us
to write something, we have to translate our inner monologue.
All of us have a voice in our head. When
(09:18):
we're reading a piece of paper, we're actually hearing a
voice in our head, which may or may not be
our own. We can get into that sort of research
that's very interesting because we're going to start having to
answer some of these questions about what is human language?
Where did it really come from and what's AI doing
with it? But basically it's following you your entire life.
(09:42):
That's the theory, and that becomes your intelligence amplifier. Now
the other side of this is your wisdom keeper. And what's that? The sum total of all of your knowledge after you've passed on. Where does that go? Who do you give it to? And what value does it have? Is it you in an AI? Is it some kind
(10:05):
of sentient life? I don't think so. But it would be able to answer a question of what would George Knapp say about this, perhaps very much like ChatGPT does when answering a question of what maybe Albert Einstein would
say about this new discovery in physics. It'll be pulling
(10:28):
ideas and, let's say, resonances of what your essence was and sort of drumming up a response. And this is sort of what Pierre Teilhard de Chardin came up with, the noosphere and the Omega Point, and what Chardin
(10:52):
basically predicted back in the early nineteen hundreds (he was a Jesuit priest). He said that the geosphere, which represents inanimate matter, is the first sphere; the biosphere, which is the biology that we're living in right now
(11:16):
and on Earth.
(11:17):
And then the noosphere, which is the human thought, human wisdom, human knowledge connection.
So if you were to pass and you left your wisdom keeper available to be connected to others, you now have a noosphere. You now have this ability to have the essence of different individuals connect. Now, obviously you
(11:40):
don't want certain private bits of information out there, things that you might find embarrassing and that sort of thing, and that can be not only edited out, it could be in a sense deleted, but the sum total of your knowledge may be retained. And it's a sort of modern Library of Alexandria that maybe this time wouldn't get destroyed.
(12:02):
So that's the big picture of it. What can we do now, in the phase we're in? Well, you have a local AI that reads all of your emails, everything that's on your hard drive, any context it can get from you, and it starts forming ideas and opinions. Now, again, the idea is to lock this down in a blockchain,
(12:24):
very similar to bitcoin, make it encrypted, very difficult to
get access to even if you have all of the
passwords and things of that nature. So it is completely
private and safe, but of course perhaps still hackable. Everything
electronic is, but you're taking great measures and at that
(12:47):
point you can have a dialogue with your context. The
things that you've experienced and the insights that you get from that are quite phenomenal. It's the sort of research I've been doing at that level for over a decade. And
the intelligence amplification you get from this is something that
(13:07):
I would want every human being who wants it to have. I think it will give you armor to protect you from the world that we're in and entering into,
and the ability to sort through the information overload that
every person in the modern world is facing at this point.
(13:29):
And it's just scheduled to go up logarithmically as AI
starts generating so much content. The bad thought is that AI will
get to generating so much content, so much noise, that
you're going to need not only to train yourself to
higher and higher levels of discernment, but you're going to
(13:50):
need an intelligence amplifier just to be able to wade through the world. So that's where it's kind of going,
and that's the premise of it.
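As a small illustration of the "encrypted and never touching the Internet" idea above, here is a sketch, assuming the widely used Python cryptography package, of keeping a piece of personal context encrypted on the local disk; it is a simplified stand-in for the blockchain-style locking described, not an implementation of it, and the note contents are invented.

```python
# Sketch of keeping a piece of personal context encrypted at rest on the
# local machine, using the open source "cryptography" package (Fernet,
# symmetric encryption). This is a simplified illustration of the
# "encrypted, never touching the Internet" idea, not the blockchain
# scheme mentioned in the conversation.

from pathlib import Path
from cryptography.fernet import Fernet

# Generate a key once and keep it somewhere safe (hardware token, passphrase
# manager, etc.); anyone who holds this key can decrypt the archive.
key = Fernet.generate_key()
cipher = Fernet(key)

personal_note = b"Hypothetical example: notes from today's reading and conversations."
Path("context.enc").write_bytes(cipher.encrypt(personal_note))

# Later, the local AI (and only it) decrypts the context to reason over it.
restored = cipher.decrypt(Path("context.enc").read_bytes())
print(restored.decode())
```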
Speaker 2 (13:59):
So much of what has happened in the last, what seems like three or four months, is this explosion of ChatGPT. It's a novelty. People view it as a novelty. They're asking philosophical questions, trying to stump the AI, or, you know, occasionally getting an answer that's useful, or playing games. You know,
something that you had predicted a long time ago that
(14:20):
people would use it to create music. Yeah, I've heard some Beatles songs that are pretty interesting over
the last couple of months that sound a heck of
a lot like the Beatles singing songs that they'd never recorded.
Are you encouraged by that part of it?
Speaker 3 (14:37):
I am. I am really bifurcated by this whole scenario.
I really believe that you have to have the ownership
of yourself, and I think we have to have a
mature conversation not just among politicians and legal scholars, but
(15:00):
across basically everybody. Do you own you? Do you own
your DNA? Do you own your intestinal flora? Do you
own your likeness, your voice, the creative output of your mind? If somebody ever were to be able to read your mind, is that your property, right? I think we really need
(15:21):
to have that discussion much sooner than any of the discussions that are taking place about AI as the Terminator and the Matrix and things like that. That's number one. If that is in fact the case, where we do have ownership (because if you don't own you, somebody else does, it's as simple as that), then I think whenever this
(15:44):
is a public discussion, that should be the period at the end of the sentence; you know, any negotiation is coming down to that. And I think we know
where that leads to. So I think that debate can
be settled very quickly. So now, when it comes to creative works: if an artist owns their likeness and the
(16:06):
style of their music, and it's being held out that here is a song the Beatles didn't do, I think we really need to sort of give that ownership back to the artist, and whoever is owning their works at that point if they've passed, certainly
(16:27):
their family or heirs or something at that level. I think we're in the Wild West right now. So on that side,
I think we really have to go down that road
as a culture and as a society and have a
mature conversation. But as far as the creativity goes, it is fascinating.
Speaker 1 (16:45):
Listen to more Coast to Coast AM every weeknight at one a.m. Eastern, and go to coasttocoastam.com for more.