
November 30, 2023 62 mins

Today is the anniversary of OpenAI’s launch of ChatGPT, a tool which brought AI out of the realm of sci-fi and right to our fingertips. AI seems to have crept into every facet of our lives in that one year, and it’s hard to know if that’s a good or bad thing–especially in light of the chaos wrought by OpenAI’s recent firing and rehiring of their co-founder Sam Altman.

 

Sometimes it feels like the battle lines are drawn–you can be for or against AI–and the stakes are high. So in this episode of Next Question, Katie is joined by her plus one, Vivian Schiller, in conversation with data scientists and AI ambassadors Chris Wiggins and Vilas Dhar, to sort through some of the noise. 

 

The panel covers a lot of ground, but remains grounded in real-world examples (and there are several acronyms defined!), to rationally consider what AI can and should do for us now, what risks we should keep an eye on, and who needs to be involved in the conversation shaping AI’s next chapter.

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Cancer Straight Talk is a podcast from Memorial Sloan Kettering
Cancer Center, where host Dr. Diane Reidy-Lagunes has intimate
conversations with patients and experts about topics like dating and sex,
exercise and diet, the power of gratitude, and more. I
loved being her guest back in April. Listen to Cancer

(00:21):
Straight Talk. You'll learn so much. Hi everyone, I'm Katie
Couric and this is Next Question. So I have to
confess, friends, that if I sound weird, it's because I
have a terrible cold. So I apologize in advance. Luckily,

(00:46):
my plus one isn't sitting next to me catching my germs,
but she is at a remote location. Where are you, Vivian?

Speaker 2 (00:55):
I'm in Bethesda, Maryland, in my home.

Speaker 1 (00:56):
Oh nice. Well, Vivian Schiller, as you probably heard, is
my plus one today. And Vivian, I thought I'd start
by telling folks how we know each other. Do you
want to start?

Speaker 2 (01:09):
Well, actually, I think we knew each other when you
were at CNN, but you would not remember me. Well,
you knew my husband.

Speaker 1 (01:17):
Yes? Was this in Atlanta?

Speaker 2 (01:19):
This was in Atlanta. You worked with my husband.

Speaker 1 (01:21):
In the early days of CNN. Was that at Take Two?

Speaker 2 (01:24):
In the mid eighties, So I think that's how we
initially met.

Speaker 1 (01:27):
Vivian and I have crossed paths at various times in
our lives. I think Vivian is the only person I
know who has worked at more news organizations than I have.
Vivian, give us the rundown.

Speaker 2 (01:41):
CNN, New York Times, NPR, NBC, The Guardian, well, as
a board member, and also at Twitter doing a news
role there, and at Discovery running a news documentary channel.

Speaker 1 (01:55):
So basically, like me, Vivian cannot hold a job. So
we are going to have a conversation today that actually
was prompted by a conversation I heard at an Aspen
Institute board meeting. Vivian and I are both very involved
in the Aspen Institute. In fact, she's got a paying
job there. Vivian, what exactly is your role at Aspen?

Speaker 2 (02:18):
I run a program at the Aspen Institute called Aspen Digital,
and our focus is on all things technology and media
and their impact on society, so exactly all the stuff
we're talking about today.

Speaker 1 (02:28):
And Vivian and I got to know each other even
better when we both served on the Aspen Institute Commission
on Disinformation, which has a more formal title, which is
go ahead, Vivian.

Speaker 2 (02:39):
The Commission on Information Disorder.

Speaker 1 (02:41):
Thank you, Yes, And we got to know each other
well during that time, but we've known each other for
a long time, so we are excited to have this
conversation for all of you. I learned a great deal
and we have two incredible experts who are coming on
to talk about not only what AI is, but obviously

(03:03):
the promise and the perils of this new technology. And
of course we're going to touch on Sam Altman's ouster
and then reinstatement by the board at OpenAI, which
is full of all sorts of intrigue. And this is
hopefully a podcast that is AI for dummies, but I
fear the only dummy in the conversation is yours truly.

(03:26):
So without further ado, let's invite in our guests, Chris
Wiggins and Vilas Dhar. Welcome to the podcast. Thank
you so much for being here. I should note that
November thirtieth, today, is the one-year anniversary of ChatGPT,
so we actually have a news peg for this podcast. And

(03:46):
before we dig in, I thought I'd ask you briefly
about what you all do and why you're qualified to
have this conversation. Chris, why don't we start with you?

Speaker 3 (03:56):
Sure?

Speaker 4 (03:56):
Fair question. So for the last twenty two years I've
been on the faculty at Columbia. I teach applied mathematics.
My research is in machine learning, mostly applied to biology.
For the last ten years, I've also been the Chief
Data Scientist at The New York Times, which means I
lead a team that develops and deploys machine learning.

Speaker 3 (04:12):
And I'm Vilas Dhar. Twenty odd years ago, I
started my career as a computer science researcher, working on
what we called artificial intelligence back then. Since then, I've
spent my career building private and public organizations that focus
on using tech as a way to advance justice and equity,
and now lead probably one of the largest philanthropic
organizations focused on funding AI that makes the world a

(04:32):
better place.

Speaker 1 (04:33):
Well, I'm very excited to have you both, as well
as my friend Vivian, who you both know well. And
I thought I'd start with a very basic question, what
is AI? Who wants to take a shot at that?

Speaker 4 (04:47):
I could try a historical view. AI is one of
my favorite drifting targets, meaning it's a term that means
different things to different people in different decades, in different communities.
So when the term was coined in nineteen fifty five
by John McCarthy, it was a proposal around the idea
that any feature of intelligence can, in principle, be
so precisely described that a machine can be made to

(05:09):
simulate it. So the conception of what artificial intelligence meant
in nineteen fifty five is so different than what it's
come to mean now, even in twenty twenty one, let
alone a year ago today when ChatGPT was launched.
And now everybody, when they think of AI, they're thinking
of a chatbot, which is really a small
example of machine learning, which is a small example of

(05:32):
artificial intelligence. So the term has come to mean different
things in different times, which is why the term never
feels like you're standing on solid ground when you're saying it,
because different audiences can mean very different things when you
say those two letters.

Speaker 3 (05:44):
You know, I'll agree with that. I agree with everything
Chris said. I mean, on one side, AI is a
technology conversation. It's a new set of tools that let
computers do what people have traditionally thought only we could do.
But it's also something much bigger. It's a social phenomenon.
It's a moment now where we get to test and
examine some pretty basic assumptions about what it means to

(06:04):
have an economy, a political society, about what it means
to be human. And that's why we're seeing this amazing
groundswell of interest in what AI is.

Speaker 1 (06:13):
When you think about AI, can you explain in very
sort of eighth grade terms, how it works, how these
large language models are assembled, and how machine learning enables
technology to spew out things that make sense. Chris, can

(06:33):
you help me with that?

Speaker 4 (06:34):
Sure. I think, again, history is really useful here. One
example of how you might build a chatbot statistically was
Claude Shannon in probably nineteen forty four was thinking about
this model where you generate words at random. Imagine that
you're reading a book and you find some word, and
then you keep reading in that book until you find
that word again, and then write down the word that

(06:55):
follows it. Then keep reading the book, wait until you
find that word again, and write down the word that
follows it. That's the basic nexus of a small language model. So
you're predicting the next word based on the previous word.
You can think about what's being done today as a
very large version of that same small language model. It's
a statistical prediction model. And an important part there is

(07:16):
that it really matters what book you're training it on,
and so you need a very large corpus of training data.
In this case, one of the things that makes large
language models possible is the vast amount of information that's
available online, and so computer programs automatically ingest all of
the text on the web, which could be from Reddit, newsgroups,
or Wikipedia, or, hey, they.

Speaker 1 (07:37):
They use my book, The Best Advice I Ever Got, for
ChatGPT, and nobody asked my permission, by the way.
That's right.

Speaker 2 (07:43):
That's a whole other issue.

Speaker 4 (07:45):
That's exactly right. That's a whole other issue, which is how this
relates to the rights of the authors. But the statistical
problem is one of training from data. So the data
are central, and it's counterintuitive, I think, to many people
who think that computers are about writing down rules, and
when you write down the rules about how we think
we think, then you'll get something that acts like how

(08:05):
we think we think. And in fact, for most of
artificial intelligence as a field, for the last seventy years,
that's how people thought we were going to achieve artificial
intelligence was by understanding how we think we think, and
then you would just simulate it or just program it.
And the truth is, it's been a realization in the
last two decades that the way that we are able
to achieve such exciting results is from taking really large

(08:26):
data sets and learning from the data how to build
a computer that emulates, really imitates what we sound like
when we are intelligent.

Speaker 1 (08:34):
In words, sometimes when I'm writing emails, these words like
so much. I must use that all the time, thank
you so much. It shows up in my email if
I want to just kind of press a button and
not write anymore. Is that an example a rudimentary example
of AI.

Speaker 4 (08:53):
Yes, very much so.

(08:54):
There's the math behind it, which is how are you
going to predict the next word? But the other thing
about it is the product and sort of the user interface.
People like to talk about how in the late fifties
at Stanford there was John McCarthy, who was working
on the mathematics of AI, and then there were people
like Doug Engelbart who were working on the product of AI.
How are we going to make an interface that allows

(09:16):
people to interact with the computer. Well, so what you
just saw there was a good example of good math.
And the math could be as simple as counting the
number of times that one word follows another word.
So it's very simple math. But as well as the
product idea, which is, how do I make a suggestion
to you in a way that's useful to you while
you use that digital product and not creepy and not intrusive.

(09:36):
So yeah, that's another thing that we're seeing with ChatGPT
is a good coming together of technology and mathematics and
statistical models, but also just a nice product that people
are enjoying using.
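
(A minimal sketch of the counting idea Chris describes: predicting the next word purely from how often one word followed another in a training text. Illustrative Python only; the corpus, function names, and example output are invented for this sketch, not from the episode.)

```python
from collections import Counter, defaultdict
import random

def train_bigram_model(text):
    """For each word, count which words follow it in the training text."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(model, word):
    """Pick a next word in proportion to how often it followed `word` in training."""
    counts = model.get(word.lower())
    if not counts:
        return None  # the word never appeared (or never had a successor) in training
    candidates, weights = zip(*counts.items())
    return random.choices(candidates, weights=weights)[0]

# Toy corpus; a real large language model trains on a huge slice of the web.
corpus = "thank you so much thank you so very much so much appreciated"
model = train_bigram_model(corpus)
print(predict_next(model, "so"))  # usually "much", sometimes "very"
```

Scaled up, with longer contexts, learned weights instead of raw counts, and vastly more text, this same predict-the-next-word setup is, loosely speaking, what today's large language models do.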

Speaker 2 (09:48):
What you're describing, Chris, is a predictive model. But so
many people, particularly since a year ago today when ChatGPT
came out and sort of collectively blew the world's minds,
it feels like we're talking to a machine that is
actually thinking, that is actually sentient, and it's in fact
designed that way, and that has societal implications, some of

(10:09):
the societal implications that you were referring to earlier.

Speaker 3 (10:11):
Vivian, you know, you're pointing out the critical kind of missing
element in what Chris described as what AI is today.
At the end of the day, every version of AI
that we have today is a mathematical model that predicts
what happens next based on what's happened before. It doesn't reason,
it doesn't think, it doesn't have agency, it doesn't have preferences,

(10:32):
all of the things that people now try to scare
us with. I imagine we'll talk a little bit
about that crazy term, AGI. Today's AI is none of that.
I often like to say, do you all remember that
movie Honey, I Shrunk the Kids?

Speaker 1 (10:44):
Yeah, Rick Moranis, yeah.

Speaker 3 (10:46):
Great movie. Right. Today's AI, all it is is this:
take everything that's ever been written, put it in a
giant library, beam a ray gun at that library, and
enough power to power a small city for months and months,
and say, how do we compress that entire library down
and give us one little map? And all the math
does is it says this: if, before, when people said
a word, they often followed it with another word, that's

(11:08):
all we're going to do right now. Now, what that does.
It's an amazing magic trick. It's a great illusion. It
makes you think you're talking to somebody who wants to
talk back to you. But at the end of the day,
all the machine is doing is predicting what the next
word and the answer should be. This is so critical
because it reframes how we engage with these tools, and
that's really all they are. They're just tools. They're not,

(11:29):
you know, all-knowing entities. They're not partners, they're not
conversational sparring buddies. They're just tools that help us
maybe be better.

Speaker 1 (11:38):
Let me ask you this because I thought this was interesting.
Bill Gates recently noted that AI as it exists today
is, quote, still pretty dumb. Chris, do you agree with that?

Speaker 2 (11:50):
Yeah?

Speaker 4 (11:51):
Absolutely so. I think what Vilas is saying is apt,
which is, language and the ability to produce language is
a great imitation of thinking. And in fact, I
use the word imitation because in Alan Turing's original nineteen
fifty paper on can machines think, he basically set
out this problem. Imagine a computer that could imitate what
it's like to talk to somebody. That's sort of an

(12:13):
operationalization of what it means to think. But there's still
many things AI can't do. As often described, planning
is difficult, compositional thinking is difficult, working with multiple modes
at once is difficult, meaning, like, words and images together.
So I think you're right that it's uncanny. Right, we're
in the uncanny valley of conversations right now with chatbots.

(12:34):
But it's very difficult for people not to impose this
belief that it is intelligent or thoughtful. And the truth
is people have been having that experience for as long
as they've been building chatbots. Even in the nineteen sixties,
people were building chatbots based on simple rules, and users
using that chatbot had the same experience of feeling like
even though they knew it was just a very simple

(12:54):
computer program, the emotional resonance was as
though you were talking to an intelligent agent.

Speaker 1 (12:59):
I hear this word sentient a lot, which of course means
capable of sensing or feeling, conscious of, or responsive to
the sensations of seeing, hearing, feeling, tasting, or smelling. Sentient beings,
which is really what it means to be a human.
Does AI have the capacity to be sentient?

Speaker 4 (13:20):
I think what we've shown is that it does a
great imitation of it. But I think it's important for
us all to remember that it is, as Vilas said,
just math, right. It is a mathematical model that spits
out words and it's optimized for generating words that sound
like what a human being would say, given the
same prompt. But we should remember that it is a
machine and it's executing a mathematical act that we trained

(13:41):
it on.

Speaker 2 (13:42):
But yet there's a lot of experts out there and
researchers and some pretty serious people who are trying to
warn us that these machines may become sentient. Now, is
that just a matter of seeing too many sci fi
movies or is that something that is possible obviously not today,
but on the horizon.

Speaker 4 (14:01):
I think there's a couple of reasons why people are
warning us of that possibility. I mean, again, part of
it is wordplay, and I think that's what Alan Turing
was getting at in nineteen fifty when he said, can
machines think, is an ill-posed problem, so let's try
to operationalize it. But I think the warnings are often
distractions from real problems that automated inequality and other downsides

(14:21):
of using algorithms today are causing in the here and now.
It's sometimes very difficult to think about our existing challenges
in sociotechnical systems, and somehow more pleasant to think about
this Terminator doomsday future, which has not occurred yet. I
think also there's.

Speaker 1 (14:36):
A yet. Yet, that's scary. That's right.

Speaker 4 (14:39):
I think there's also a concern that people are putting
forward this idea of a doomsday because there's only a
small number of companies at present which are able to
afford amassing lots of data and producing really good products.
And often these companies are saying, we are the ones
who can tell you how to regulate this. So there's
concern that some of the doomsday scenario might be coupled
to regulatory capture or getting ahead of potential regulation

(15:02):
both domestically and internationally.

Speaker 3 (15:05):
Let's zoom out a little bit. I agree with what
Chris said, but maybe, and you spoke to experts and
serious people who are trying to scare us all, and
I just want to put this in context. There are,
you know, eight billion people on the planet. There are
maybe a few hundred thousand that really understand how AI works.
And there's maybe on the order of a few thousand
people who have decided that what they care about more

(15:27):
than anything else is the existential risk to humanity. Let's
just put that in context. There are billions of people
on the planet today that could use what AI offers
as a promise to fundamentally change what their lives look like.
Their access to economic opportunity, to any number of other things.
There are hundreds of thousands or millions of people who
work in companies that could fundamentally change their relationship with customers.

(15:49):
So what do we have. We have a small set
of people with direct access to some of these tools,
and we'll talk about how that came about shortly, who
have come up with this common idea that the thing
we should care about more than anything else is that
AI will kill us all. And in the meantime, we're
living in a world where there are so many opportunities
and challenges here and now that we should be spending
our time thinking about.

Speaker 2 (16:09):
So there are the here and now, there is the
long term future, and there's that whole middle ground that
I think many of us haven't addressed. But before we
come back to that, just to help our listeners a
little bit, we hear the term AGI artificial general intelligence,
not to be confused with generative artificial intelligence. Very confusing

(16:30):
even for people that pay attention to these things. AGI
is this sort of robot-overlord thing we're talking about
that may or may not ever happen. Is that right?

Speaker 1 (16:38):
Yeah? What is that? You guys help me out. I'm
the dumb one here.

Speaker 4 (16:42):
I think one thing that's useful is to remember the
two different G's in those two acronyms. The G in GAI is generative,
but the G in AGI is general. So when people talk about AGI,
for general intelligence, part of what is exciting is the
idea that in the last fifty years we as a species,
not me personally, but the human species, have done

(17:02):
a really good job building algorithms that are good for
individual tasks. Like you can build an app that can
take a picture and say does this have a dog
face in it? Or a cat face in it? That's
a specific use of statistical modeling, which is good for
that specific use case. So the dream of AGI is
that you can produce one algorithm, one machine, one model
that's good not only for disambiguating cat faces from dog faces,

(17:25):
but also composing a sonnet or enjoying strawberries and cream,
or whatever general problem you would like the machine to solve.
So that's the G of AGI: general. It's very easy
for us to make machine learning models that are good
for one specific task. It's much harder to make a
machine learning model that's general and is able to do
anything that we consider an intelligent task.

Speaker 1 (17:46):
So where are we in terms of And I didn't
really understand that explanation, Chris, can you try it again
in a more like I'm not a Columbia student, or
just pretend like I'm in sixth grade? Help me out, sure,
help me out.

Speaker 4 (18:02):
So for many decades, we've been able to build specific
machine learning models. So a specific algorithm that can tell
the difference between a picture of a dog and a
picture of a cat, say, that's a specific problem. And
we've been very good for decades at building algorithms that
can do very specific and focused tasks. One of the

(18:24):
things that we've seen with chatbots that are trained on
a wide variety of documents is that you can have
a plausible conversation with a chatbot about a wide variety
of topics. So if you've trained a chatbot only on
chemistry textbooks, you will have a great conversation about chemistry
and not about any other subject. But by training a
chatbot on a wide variety of topics chemistry, philosophy, and

(18:48):
all points in between, people are experiencing this shock that
you can interact with an AI, meaning an algorithm that
works not only solving a specific problem, solving a general problem,
in this case, the general problem of having an
intelligent-sounding conversation about a general breadth of topics.

Speaker 1 (19:10):
After a quick break, we'll be back with my co
pilot and plus one Vivian Schiller, talking to Chris Wiggins
and Vilas Dhar. If you want to get smarter every
morning with a breakdown of the news and fascinating takes
on health and wellness and pop culture, sign up for
our daily newsletter, Wake Up Call, by going to katiecouric

(19:31):
dot com. We're back with Chris Wiggins and Vilas Dhar,
along with my plus one Vivian Schiller. Have you guys
used ChatGPT or Bard? Have you tried to have
it write speeches for you or come up with any
kind of documents. I'm sure you've tested it, Vivian, what

(19:54):
has your experience been like.

Speaker 2 (19:56):
I've used ChatGPT to develop an itinerary. I took
it to Japan. I knew I needed, I had some
time between two places I needed to be, and I
had certain things that I was interested in, certain things
I was less interested in, didn't know how long it
took to get place to place, and actually ChatGPT
gave me an amazing itinerary, so it was very useful.

Speaker 1 (20:16):
Travel agents probably don't like that, how about you, guys?

Speaker 3 (20:19):
Yeah, you know, I mean, I've used every LLM out there,
and so I'm, like, I'm no longer, you know, I
wish I could say I was at the emergent frontier
of AI, but I'm no longer. Now I have a
different role. But I spend a lot of time with
the smartest people that are working on this stuff, and
I've used them all. I've used them to do really
basic and pedantic things like oh, give me some talking points.
I no longer do that, having tried it a few
times and realizing how bad it is. I spend a

(20:40):
lot of time using these generative AI tools with my nieces
and nephews. I'm doing really fun things like saying, hey,
let's come up with, imagine a scene, and let's see
if we can get an AI to draw it for us,
and then ask the question, hey, is this kind of
what you pictured in your mind? And how do we
make it better? And we actually iterate with generative AI
to create new artworks. Or even basic things sometimes like, hey,

(21:00):
I want to tell you a bedtime story, what do
you want it to be about? And then we work
with an AI to kind of come up with a
nice little tale. Look, these are all really fun. But again,
I want to make sure that we understand that we're
kind of missing the point a little bit, right. These
tools have changed our lives, as Vivian said, and have
done so really in an amazing way, but not because
the technology has already changed our lives, but because it's

(21:21):
opened our eyes to what's possible when these tools get
to be really amazing. We talk all the time about
how these tools have hallucinations, right, the idea that you
might ask it a question and it doesn't check whether
the answer is real or not. It just kind of
spews some language back at you and you say, okay,
well, that sounds reasonable, and you move on. The tools
that we have today aren't products yet. These are still kind

(21:42):
of the very early days of what generative AI will
look like. And my hope is when we start training
these models on medical data that includes all of the
kind of published medical literature, we'll get to a much
better sense of what a generative AI can do to
help a doctor diagnose. But at the end of
the day, I can't imagine a world in which we say
the generative AIs we have today are directly diagnosing

(22:03):
a patient. The only thing they can do is help
a doctor or a medical professional who's trained to use it
as an input into their process to figure out what's
going on. And that's the moment that we're stuck in
right now, because I know so many of us want
to jump into a future where we say that AIs
are going to do everything for us, but we're really
in a moment where we're saying, the only way this
works is that the AIs support human decision makers. They

(22:25):
can use what the technology gives them, but also their own experience,
their lived wisdom, their, you know, working with patients for
many years, to actually make a decision.

Speaker 2 (22:33):
You know.

Speaker 1 (22:34):
I tried to get ChatGPT to write a poem
for my husband's birthday and it was very, honestly not
very good. I gave it information about my husband, but
it was quite pedantic and not very clever. It was
sort of, honestly, Hallmark card quality. And I think it's

(22:54):
because it didn't have the breadth of knowledge about him
that I do, so it couldn't really compete with that,
but it was fun to try it. And another example,
when I interviewed Karl Rove at the Aspen Ideas Festival,
I was trying to come up with a fun title
for the conversation. I asked ChatGPT and it came

(23:16):
up with a great title, which was the Elephant in
the Room because it was on the future of the
Republican Party. And I was like, that is genius. So,
you know, I think you're right, what you were saying,
Vilas, about it being helpful but not determinative. And one
example is, you know, I'm very into cancer screening and

(23:37):
some of the things that they're going to be able
to do, that are beyond the ability of a human
to see things, is to take these massive data sets
and look at scans and actually predict if
someone may or may not get breast cancer in the
next five years. I mean, that really blows my mind.

(24:01):
But that obviously has to be done in conjunction with
an experienced medical professional, right? So is that what you
mean, Vilas, by kind of being an aid?

Speaker 3 (24:10):
It is. And let me add a little bit of nuance, Katie.
I mean, you've been such a courageous kind of leader
on this topic. When we started looking at breast cancer
in particular with AI through our lens as a civil
society institution, we learned about this fundamental problem that's just
going to kind of blow your hair back. We have all
these algorithms today that have been trained to do exactly
what you described to take a mammogram or a scan

(24:33):
and say hey, can we do early prediction of cancer risk?
But all of these tools we learned very quickly have
been trained on global north populations. They've been trained on
American data and European data, and so when an organization
like Instituto Protea, which is a partner of ours, took
these to Brazil and tried to use them on low
cost machines that were already in these settings, they found the

(24:53):
algorithms didn't work at all. So even in that aspirational
moment that you've created this idea that we might have
this massive breakthrough, we come back to a very
human kind of fundamental problem: that until we train these
systems on data that are representative of all of the
people in the world, not just those who have privileged
access to Western medicine, right, we're never going to realize

(25:13):
the promise you talked about.

Speaker 1 (25:15):
How biased is AI? How biased are these large language models,
because I remember doing a documentary on our tech addiction,
and this was just starting to be talked about, and
I think this was like in twenty eighteen, Chris, do
you see this as a major problem, that it doesn't
really represent so many people in society?

Speaker 4 (25:38):
I mean, the problem is always how something is used
or interpreted. I would say in the context of medical
usage of AI, there's additional challenges around responsibility or attribution
and decision making. So I think for all of these tools,
they're going through this very inefficient part of our hype cycle.
So in a hype cycle, there's a moment where you
discover a technology and you have this moment of irrational

(26:00):
exuberance and you think it's going to be great, and
then you have some trough of despair as you realize
it's actually not that good at generating a poem about
your husband, in your case. And then we get to
some efficient place where we all have an understanding of
what these technologies can do and cannot do. So I
do think we all need to limit our trust in
all these technologies, and that's true for technology in general.
But I think Vilas is making a great point,

(26:21):
which is, for machine learning in general, which again is
the strategy that actually works for artificial intelligence, where you
train an algorithm on lots of data, it is extremely biased,
in the sense that it's well suited to the data
set you have, and there are many complex problems in
the world where when you train it on one data set,
it will not generalize to some other very different data set.

(26:41):
And the different data set could be you've trained a
language model in chemistry and then you try to test
it on poetry, or it could be that you've trained
it on genetic information from one demographic group and then
you realize it says nothing about, say, predicting phenotype from
genotype for a different demographic group. That is a real problem.
It often goes under the name of bias, but in the
case of machine learning, it's built into the system. If

(27:04):
you train it on one data set, you're going to
have a bad time if you try to use it
on a very different data set.
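
(A toy illustration of the "bad time on a very different data set" point: a model fit on one population can look accurate there and fail badly when the feature distribution shifts. Hypothetical Python sketch only; the populations, feature, and size of the shift are invented for the example.)

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_population(n, offset):
    """One feature, two classes; `offset` shifts the whole population's measurements."""
    x_class0 = rng.normal(loc=offset + 0.0, scale=1.0, size=(n, 1))
    x_class1 = rng.normal(loc=offset + 3.0, scale=1.0, size=(n, 1))
    X = np.vstack([x_class0, x_class1])
    y = np.array([0] * n + [1] * n)
    return X, y

# Train on population A, then evaluate on population B, whose measurements sit
# in a different range (think: different scanners, different demographic groups).
X_a, y_a = make_population(500, offset=0.0)
X_b, y_b = make_population(500, offset=5.0)

model = LogisticRegression().fit(X_a, y_a)
print("accuracy on population A:", model.score(*make_population(500, offset=0.0)))
print("accuracy on shifted population B:", model.score(X_b, y_b))  # roughly chance
```

The learned decision boundary sits where population A's classes separate, so on the shifted population nearly every case lands on the same side of it.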

Speaker 2 (27:09):
Let me follow up on that with both Vilas
and Chris, which is how much of that has to
do with the people who are selecting the data sets,
who are creating the technology, who are deploying technology, most
of whom are in Silicon Valley, or maybe in
a few centralized companies. How much of that is an issue?

(27:31):
And how do we get out of that jam?

Speaker 3 (27:32):
Yeah, Vivian, I'm going to take that. I'm going to
go one step bigger. I'm going to give you an
example of it. We talk a lot about, you've probably
heard about hiring algorithms, about how companies are using AI
to screen resumes about who they want to hire. And
there's a story that's been well told there about the
fact that these algorithms are often biased, they often pick
men over women for particular types of technical competency. That's one story,

(27:54):
and we get it. But there's a bigger story here
that we have a really hard time engaging with. Those algorithms
were trained on twenty years of data about how human
recruiters picked candidates, and yet we never talk about the
fact that for twenty years we've lived in a world
where our own recruiters are showing these biases day in
and day out. The question we should be asking is

(28:15):
not, why is the algorithm biased? It's, why, as a
society, have we been so okay for twenty years with
this set of outcomes, and now that we have a
tool that shows us just how biased we've been, we're
not having a public conversation about restructuring our entire hiring
mechanism across the private sector. This is just one analog
of a lot of things like this that I think
are emerging across the board, where AI, because of the

(28:37):
bias in the algorithm, is putting a spotlight on the
bias in our human behavior. We should be using AI
as an investigative tool, as a magnifying glass that lets
us look at all kinds of decisions and say, how
do we build a more just and equitable society. Let's
have a conversation about the bias in AI. We absolutely should,
and we kind of know what
the answer to that is, right: more representative data, representative talent that

(29:01):
designs these algorithms, making sure there's public compute that allows
these people to develop products. But let's take the bigger
picture here. What we're going into over the
next twenty years is a world in which these tools
demonstrate to us why we're okay with the society we've built,
and let us question if we actually want to make
some fundamental changes in them.

Speaker 1 (29:20):
But maybe AI can be an instrument for change, Vilas.
I mean, you know that is such a massive undertaking
to uproot bias in society. I mean, it's so baked in,
so maybe this is one entryway to address it.

Speaker 3 (29:36):
Absolutely. It's one of the things I'm most optimistic about,
right, is when we look at things like, we're going
to have a conversation, I'm sure, here about some of
the recent developments in the AI world, one of which
has just been the continued silencing of women and underrepresented voices
in building these tools. I'm deeply optimistic about the fact
that we could invest in creating a new capacity to

(29:56):
build AI that's really representative. A lot of those problems,
one, we'd shine a spotlight on them, and two,
we'd very quickly move to fix them.

Speaker 1 (30:04):
Well, when we come back, we're going to talk about
how do you regulate artificial intelligence, What in the world
is going on with Sam Altman and OpenAI, and
how quickly is this technology going to evolve. That's right
after this, I want to tell you all about the

(30:28):
Cancer Straight Talk podcast from Memorial Sloan Kettering Cancer Center
with MSK oncologist Dr. Diane Reidy-Lagunes. I was a
guest and we had a totally candid conversation about my
family's experiences with cancer, including my husband's illness, my own
treatment for breast cancer, and of course that time I
got a colonoscopy on TV. Cancer Straight Talk features life

(30:51):
affirming conversations with experts and patients alike about topics affecting
everyone touched by cancer. If that includes you, I hope
you'll listen in to my episode and every episode of Cancer
Straight Talk. We're back with Chris Wiggins and Vilas Dhar

(31:11):
along with my plus one Vivian Schiller. Chris is the
Chief Data Scientist of The New York Times, Associate Professor
of Applied Mathematics and Systems Biology at Columbia, and he
wrote the book How Data Happened, A History from the
Age of Reason to the Age of Algorithms, which frankly
I read in two days, just kidding, Chris. Vilas

(31:35):
is President and trustee of the Patrick J. McGovern Foundation,
which focuses on AI and data solutions. And my plus
one today is my good friend Vivian Schiller, who has
worked in many media organizations and has really dug into
AI and technology, media and society. So you gave me

(31:55):
the perfect segue, Vilas, in our last conversation before
the break, and that was what is happening right now
in various technology companies. So, Chris and Vilas, who wants
to kind of explain this Sam Altman drama, which is
being watched with bated breath by everyone in technology and

(32:17):
I think in media right now. Chris, you want to
give it a shot?

Speaker 4 (32:21):
I can try, with the warning that, you know, we're all
outside the company and so all of it is speculative.
You know, there's a set of about four people who
really know what happened. There are the people who were
on the board that were voting to oust the CEO,
and so there's a very small number of people who
really were in the room when it happened and can
tell us.

Speaker 1 (32:39):
Having said that, Chris, though there's been some pretty strong
reporting on it that I've read, and let me try
to set it up if I could. So, Sam Altman,
this young genius head of OpenAI, who I think
is very well liked by the press, considered obviously a
real leader in the field, was the CEO of OpenAI.

(33:00):
Two members of the board were very concerned that
the business model was superseding the ethical considerations of AI,
is my understanding. Okay, Vivian, you look like you want
to add something. Is that right?

Speaker 2 (33:14):
Uh, well, all they have said publicly, and I think
I've seen a lot of that, there has been some
fantastic reporting, is that Sam Altman was not communicating in
a way that made the board, I forget the wording exactly,
but communicating to the board in a way that made
them feel comfortable. They didn't specifically say they were worried
the AI was getting out ahead of his skis. I

(33:35):
think there's one other interesting twist in all of this,
which is not to get too technical, but the structure
of OpenAI is fascinating.

Speaker 1 (33:42):
Well, it's really important, I think to mention that.

Speaker 2 (33:45):
Yeah, it's a not-for-profit organization, a five oh
one C three, which, as someone that has been part
of and led five oh one C threes, has very
specific governance. They have a governance toward a mission, a
stated mission that is part of how the organization is
set up.

Speaker 1 (33:59):
Let me just interject that their work should benefit,
quote unquote, humanity as a whole.

Speaker 2 (34:05):
Exactly. So a not-for-profit organization is not there to
return shareholder value. It is there for the greater good.
In this case, the exact words that you just quoted,
that not-for-profit owned, among other things, this
for-profit entity that was set up because the resources that
are needed in order to continue to evolve OpenAI
require tremendous, billions of dollars of, resources. So they've set

(34:28):
this up and that entity was able to then bring
in a lot of outside money, billions of dollars to
continue to evolve and see the developments that we've seen
come out of OpenAI since then, ChatGPT
and many, many, many other tools. That's not that unusual
a setup. There are other organizations that are set
up like that and work just fine. But in this

(34:50):
case there was really a lack of alignment, and that
not-for-profit organization's management, I think it was
a four-person board, decided they were either not in
the loop or not comfortable with where the for-profit
entity was, and so they, apparently without any consultation with anyone, fired.

Speaker 1 (35:09):
They booted him.

Speaker 2 (35:10):
They booted him, and they didn't really understand, well, anyway,
they clearly didn't foresee what the rebound would be.

Speaker 1 (35:16):
Then there's a huge uprising among the employees. I think
eighty percent said they were going to quit if he
was gone, and they were going to follow him to Microsoft,
and then suddenly he's back in business at OpenAI.
Can you guys help us make sense of it? Chris,
do you want to start?

Speaker 4 (35:33):
Yeah, again with the warning that a lot of this
is speculation, because only, you know, the four members who
voted to oust him, and the six board members total,
really know what happened in the room where it happened.
But the popular understanding right now is that it was
a concern over moving too fast versus having safeguards. But
it may come out with future reporting that it was

(35:54):
about product moves, or about the decision to open up
so much of the access to the technology that they
had to slow down new signups. I mean, I've seen
many people speculate on what the causes were. Also the
possibility that some sort of particular quantum leap in the
technology caused the board to have anxiety, but at this
point we don't know. I think future reporting, good investigative,

(36:16):
shoe-leather work, right, is needed right now
to figure out what actually went down.

Speaker 1 (36:21):
Vilas, I know that Chris just mentioned sort
of a new technology, and I've been reading about this
project Q asterisk. I don't even know how you say it. Vivian,
how do you say that?

Speaker 4 (36:31):
Q star?

Speaker 1 (36:33):
Q star has been described as a major breakthrough in
the company's pursuit of artificial general intelligence. So can you
help me understand, Vilas, what the hell that means and
what that technology was? Do you know?

Speaker 3 (36:47):
Sure, super happy to. I've read, I mean, all of
the public reporting and some of the papers behind it.
But can I give you my spicy take first before
I tell you about Q star?

Speaker 1 (36:55):
Oh, we love spicy takes here at Next Question.

Speaker 3 (36:58):
Two line here, like this is the telenovella of twenty
twenty three. None of this matters, right, but we love
our tabloid headlines that we have spent so much time
I got to say, hundreds of millions of hours of
human time thinking about Sam Altman and open Ai. Let's
put this in context, and it's so important that we
get this right. Open Ai is a company that was

(37:20):
based on a public paper that taught you how to
do LLMs, large language models, these, like, these
ChatGPT-type things, right? They raised billions of dollars, which they
spent pretty much exclusively on what we call compute, right,
access to a bunch of computers, and they built the
first product that people could see. Nothing revolutionary happened at

(37:41):
OpenAI except for the fact that they took this
incredible paper that was done by some amazing scientists and
then just threw money at the problem. And once they did,
what did everybody else do? Well, then Microsoft threw a
lot of money at the problem, and Facebook threw a
lot of money at it, Google threw a lot of
money at it, and they all came up with technologies
that are pretty similar, some are slightly better than others. Okay,
it's important for us to say this because we spend

(38:02):
a lot of time deifying OpenAI as if it's
the most amazing thing that's ever happened. And it turns
out that when you have a pretty complex problem and
a pretty complex way to solve it, and you spend
a couple of billion dollars, you can come out with
an answer pretty easily. Okay, I say all that to
you, and excuse the mini rant, because now we have
a real question in front of us, right? Why is it, Katie,

(38:23):
that we're okay with the world in which a technology
that could change every human life on the planet is
held by seven companies that have these kinds of like
human personal dramas that drive what will happen with them.

Speaker 1 (38:35):
And that's all about regulation. But, Vilas, before you do that,
can you explain what Q star is before we talk
about regulation? Because I've read about it and it's
sort of shrouded in mystery and intrigue.

Speaker 3 (38:48):
Again, right, and one of the things I really appreciate
about Chris and neither of us really want to be
a pundit. Right, We've both been experts in this field
for a long time. What I can glean from what's
been publicly reported and from some of the sources I've
talked to, is that it's a shift from focusing on
language as a predictive model to being able to focus
on things like math problems as a reasoning model. So

(39:09):
instead of saying, hey, I've got a sentence, you know,
twinkle twinkle, well, we know the next words are probably
little star, it's instead a way to say, well, what
is two plus two? And you might say probabilistically, because
I've looked at everything humans have ever written, well, when
you say two plus two, it's usually followed by equals four.
But if somebody along the way, in some book had
written two plus two equals five, then one in ten

(39:31):
million times the large language model might say, oh, two
plus two equals five. We're trying to fix that. And
so Q star says, can we actually reason, if we
have two of one thing and two of another, what
happens when you put them together? This is a big breakthrough.
It is something that gets us closer to what Chris
described as AGI, right, that idea that you've got one
model that can talk about language and can do

(39:52):
a little bit of math. We don't have any sense yet.
There hasn't been public reporting yet of just how good
of a breakthrough this is. But again, you take seven
or eight hundred really smart people, you give them a
lot of compute, and you say, hey, go figure some
stuff out, and this is what the next breakthrough looks like.
I don't think it's the thing that's going to lead
to Terminator-style robots and helicopters that are out there
trying to kill us all. That's all I'm saying.

Speaker 1 (40:13):
That's good to know. I appreciate that. Well, I think
you raised the big question, and that obviously is regulation
something that Vivian and I dealt with a lot when
we were on this Aspen commission.

Speaker 2 (40:26):
The Aspen Commission on Information Disorder.

Speaker 1 (40:28):
Thank you very much, Vivian, where it's very, very difficult
to regulate these things. And maybe, I see, Vilas, you're scowling,
and so you think that's not an accurate statement. I'm
good at reading facial expressions, Vilas.

Speaker 3 (40:45):
Let's start with the question, though, like, why are we
so focused on regulating? Right? What does it mean to regulate?
It means figure out all the ways it can harm
us and limit them from doing so. Let me ask
you a different question. Like, look, I grew up in
rural Illinois as, like, a very proud American, but my
parents were not well off. For me, the biggest thing
in the world was being able to access a library.
And I'll tell you why this matters, right, I'd go

(41:06):
to a library that was paid for by a pittance
of tax dollars, that took books and knowledge and all
of these public assets and made them available to me
as a curious young kid. Today, if we're sitting here talking
about AI, you and I are fixed in a conversation
that says AI is owned by private companies. We don't
know how private companies make decisions. Well, our tool is
to regulate them. What if we asked a different question:

(41:29):
why aren't governments investing in building public-purpose AI that's
done transparently, that actually says, this is like a library,
it's a part of public infrastructure? And when we make
decisions about how AI will be used, that's a public
and democratic conversation, not for a board of four people.
We've seen what happens when you let them make decisions
about AI companies.

Speaker 1 (41:49):
Well, Chris, why isn't government getting more involved?

Speaker 4 (41:52):
Well, there's a couple of reasons. I mean. One is
at the scale of the US federal government, which I
think is what you mean by government. The response by
the US federal government is often reactive and sectoral. So
what I mean by that is reactive, meaning that often
the US federal government doesn't move in a large scale
until something clearly bad has happened, and something that's so

(42:12):
bad that everyone accepts that it was bad, and thereafter
the US federal government will make a new agency to
govern a particular sector. By sectoral, what I mean is
we have a sector of the law and a
branch of the US government around say finance or transportation
or other sectors of our lives, rather than having a

(42:33):
branch of government that works on technologies writ large. A
counter example to what I just said is FTC. So
Federal Trade Commission works on antitrust, but under the current
leadership of FTC they have sort of reasserted that part
of the purpose of the FTC is to think about consumer
protection as well, so there's an option there for FTC
to be responsive. That said, there are other ways that

(42:54):
the US government operates other than laws, for example, executive orders,
which can be an opportunity for the President to say
this is really important, and I'm demanding that other people
who are in the White House respond or commission reports
on something, and by the spending power of the US government.
So when the US government says we will no longer
give money to any company that doesn't meet this bar

(43:14):
in terms of safety or transparency or other things that
we may want from technologies in general, that has a
huge market effect because without passing any laws, the White
House in this case can actually drive companies to behave
differently for market reasons.

Speaker 3 (43:30):
Here's the thing, right, again. I know it's a big statement,
and we're kind of nibbling around the edges and we're
kind of asking about what can government do today. But
I'm going to ask the question again: why are we
so okay with the fact that we've just given up,
as a public citizenry, the ability to say that we could
actually own and build these tools? There's three things government
could do that I don't think we've touched on. The first
is that it could invest in public compute resources to make

(43:50):
supercomputing available to lots of communities and groups that are
working on AI. I worked with an amazing group called
IndigiGenius. It's a number of Indigenous AI scientists. We're
building tools for people to use on reservations that use
AI for their public purpose. There's not compute resources between
Boise and Chicago that they can get access to. Right
we should be spending money on this. There's a bill

(44:11):
in Congress right now. The second is data representation, mandating
that these companies actually include public data sets that are
truly representative, with guidelines. This could happen through regulation, it
could happen through policy, it could happen through an EO.
And the last is talent. Why are we so confident
that the only way that you can make a career
in AI is to go get a degree and then go

(44:31):
work for one of these companies making, whatever, six figures?
What if we built a public service corps of computer
scientists and data scientists? And we're seeing the start of
that under the Biden-Harris administration, to actually say, let's
go work in government and let's work in communities to
build AI products. These are three things we could do
that actually have nothing to do with limiting the safety risks
of AI tools. That's important, but that can't be the

(44:51):
only conversation. And it feels like it is right now.

Speaker 1 (44:54):
And Vivian, don't you think it's weird that this is
all handled by the FTC? I mean, why isn't there a
cabinet-level position kind of overseeing technology? It seems to
me it's such a huge issue that, you know, new
departments have been established historically, the Department of the Interior,
you know, HHS. I don't even know when they were established,

(45:18):
but it seems to me it's time to establish a
new cabinet level position and a whole infrastructure that can
help manage these issues. Right.

Speaker 2 (45:27):
Yeah, Well, Biden's executive order doesn't quite go that far,
but it's starting to walk toward that space, sort of.
Among the many things that the Executive Order says is
deep coordination among various parts of government, more AI expertise
in all of these federal offices. I mean, that's part
of the problem. If you don't have people that understand the technology,
it's going to be hard to do any kind

(45:47):
of regulation. I think they're also limited by what can
be done without the assent of Congress, since Congress doesn't
seem to be assenting to just about anything right now.

Speaker 4 (45:56):
There's good and bad to this idea of focusing new creation
of branches of government on AI. I like the
idea that government is taking consumer protection seriously. Like, that
sounds good, but a loss there is losing sight of the ways
in which AI is just another technology. So we already
have a Presidential Office of Science and Technology Policy. We
already have funding agencies. I'll show my biases as an academic,

(46:18):
but we have the National Science Foundation. It's been writing
checks since the mid fifties. So there are already ways
for the US government to spur innovation. So I like
the idea of us recognizing that AI is having a
big impact. Again, that's partly about technology, but also partly
about the power of markets and our own norms. But
I also don't want to make AI so exceptional that

(46:39):
we don't profit from the lessons learned for dealing with
technologies in general. We've regulated and made safe and made
productive so many forms of technology through both positive and
negative regulation. So I feel like there's lessons to be
learned there that we might lose out on if we
somehow think of AI as being magic and not just
another form of technology.

Speaker 2 (46:58):
Katie, I have a quick follow-up, which is the
issue of speed. These tech companies are moving really fast,
and government, often for very good reasons, moves slowly. Governments,
I should say, because you've also got actions coming out
of the European Union, in the UK, other parts of the world,
intergovernmental organizations like the United Nations, which I
know you're part of that group that's working on this, Vilas.
I mean, can they possibly keep up, let alone get

(47:20):
out ahead of this.

Speaker 3 (47:21):
Yeah, you know, I think you're asking exactly the right question.
I'm going to disagree with Chris just in a matter
of degree, which is, the sense that AI is exceptional
only in exactly what you referred to, Vivian, is the
speed of transformation that it's creating in our society, and so yes,
there's a lot to be learned by how we've dealt
with this in the past. But we don't have one
hundred years between the introduction of the cotton gin and
the creation of the National Labor Relations Board, right, we

(47:42):
don't have that much time now. Look, I think the
question is what are we reacting to and why are
we spending so much more time reacting to tech companies?
Where is there public leadership that says, what's the vision
for what AI should be in human society and how
do we create policy that gets us there? The mandate
the Secretary-General of the UN, António Guterres,

(48:02):
has given us on this high-level advisory board, to
which I've been appointed, is to move beyond just thinking
about the risks of what happens when AI is deployed
by private companies and say, what does it actually look
like to build a governance mechanism that uses this AI
to create a better future? That's not how our particular
government system is set up at the moment. I think
the Biden-Harris Executive Order, which we've referred to a

(48:23):
couple of times on here, was actually a really meaningful
attempt to take a lot of this language and push
it into one hundred page document. It's a start, but
we need a new public conversation. This isn't something to
say government should go figure this out. I think we
need to actually have a public American conversation about what
a future driven by AI looks like, and we need

(48:44):
to figure out where to start that.

Speaker 1 (48:46):
I'd love to follow up by asking how quickly is
it moving? I mean, how different will the world look, say,
in one to five years, Chris, I mean, what are
you seeing in terms of how quickly this technology is evolving?
What's going to look different in a few years.

Speaker 4 (49:04):
I often like to talk about norms, laws, markets, and architecture,
which is this idea from the legal scholar Larry Lessig,
that the forces acting on us can be clustered into
those four groups and they all have their own time scales.
So architecture in this case includes technology, which moves, it
feels like it moves really fast, and there might be
some sort of paradigm shift where we're confronted with a

(49:25):
new technology. Markets react very quickly. For example, we create
new job titles like prompt engineer and start paying people
to do that, and we start writing books about how
to use LLMs, and then our norms adapt much more
slowly as we get to normative statements like should I
use in court a bunch of citations that were generated
by llms? Is it okay to write the eulogy for

(49:46):
my friend using ChatGPT? Those are normative things that we
all have to react to. And then laws, and.

Speaker 1 (49:51):
I'll answer that question: no, it's not. Go ahead.

Speaker 4 (49:54):
So our norms constantly evolve, and then the laws, as
you pointed out, are generally much slower. The timescale for laws
is much longer than the timescale for norms. So you know,
ChatGPT was a great product innovation. GPT-3 had
been around for like a year or two before that.
I looked at my notes and saw that I was
teaching GPT-3 to my class in the spring of

(50:14):
twenty twenty two. And you know, GPT-2 was around
before that, and chatbots, as I said, were around since
the sixties. So many of these things are not new.
But what changes quickly is our norms and also the
way these things become products. So OpenAI has done a
great job of making GPT-4 the basis for other
plugins and for API access, and other companies have been

(50:36):
built sort of on top of that technology.

Speaker 1 (50:38):
What does that mean? Help translate. What do you mean?

Speaker 4 (50:41):
Sorry? Yeah, sorry, that was a nerdy tangent there. So,
Katie, APIs are application programming interfaces. It basically means I'm going
to allow one program to interact with another program, and
those two programs could be run by totally different companies.
So I could have one company's computer talk
to a different company's computer, and all sorts of creativity
is unlocked.

Speaker 2 (51:00):
There.

Speaker 1 (51:01):
Can you give me an example?

Speaker 4 (51:02):
Vivian had an itinerary. Now hook it up to Expedia
or Kayak or some other company that will buy the
ticket for you. So you had GPT write us a
poem; now hook it up to a company that will
print it for you and mail your card.

Speaker 1 (51:16):
Yeah, exactly, got it.
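
To make that concrete, here is a minimal sketch of the kind of chaining Chris is describing: one program asks a language-model API to draft an itinerary, then hands the result to a travel-booking API run by a different company. The endpoint URLs, the request and response fields, and the draft_itinerary and book_trip helpers are illustrative assumptions, not real services; only the general pattern of one program calling another over the web is the point.

```python
import requests

# Hypothetical endpoints -- illustrative assumptions, not real services.
LLM_API_URL = "https://api.example-llm.com/v1/generate"        # assumed text-generation API
TRAVEL_API_URL = "https://api.example-travel.com/v1/bookings"  # assumed travel-booking API


def draft_itinerary(prompt: str, api_key: str) -> str:
    """Ask the (hypothetical) language-model API to draft an itinerary."""
    response = requests.post(
        LLM_API_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"prompt": prompt, "max_tokens": 500},  # assumed request shape
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["text"]  # assumed response shape


def book_trip(itinerary: str, api_key: str) -> dict:
    """Hand the itinerary to a (hypothetical) booking API run by a different company."""
    response = requests.post(
        TRAVEL_API_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"itinerary": itinerary},  # assumed request shape
        timeout=30,
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    plan = draft_itinerary("Plan a three-day trip to Aspen in March.", api_key="LLM_KEY")
    confirmation = book_trip(plan, api_key="TRAVEL_KEY")
    print(confirmation)
```

The design point is the one Chris makes: the two companies never see each other's internals; they only agree on the shape of the requests and responses, and that agreement is what unlocks everyone building on top.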

Speaker 4 (51:19):
So all of those interfaces are such an unlock to
different people's creativity, and the people again could be, you know,
artists or students or other companies. So that's the thing
that can move quickly. You know, let's say
we were all stuck with GPT-3 from spring of
twenty twenty two. Now that we've had this normative change,
everybody has had their eyes opened, their creativity opened,
the market opened, which means a bunch

(51:41):
of capital flowing to this new opportunity. There's so much
room for things to change real fast. Not because the
tech is advancing so fast and scientists are so smart,
whether or not they are. It's because all of our
norms and our markets are changing so fast. There's very
little viscosity to stop us from coming up with new
ways of doing things now that we have accepted, for example,
moderately hallucinatory and somewhat truthful generative technologies.

Speaker 3 (52:07):
Let me give you two facts that put what Chris
said into stark relief. He talked about GPT-3 a lot,
a model that came out in twenty twenty, or
twenty twenty two rather. It took about eighteen
months to train that model. I'll spare you what that means,
but it took about eighteen months of people working with computers.
The newest supercomputer from Nvidia can now train a GPT-3
equivalent in four minutes. We have increased the amount

(52:29):
of compute capacity that exists on the planet by fifty five
million times in the last ten years. The pace of
change is so incredible here, and when Chris talks about
the human components of that, the pieces of connection and creativity,
we also have to acknowledge that even just what's possible
is changing almost by the day or by the month.
Q Star that we talked about wouldn't even have been

(52:50):
conceivable two years ago. So who knows what two years
from now will look like. And that's my one last
thought on regulation: we're working so
hard regulating what AI looked like two years ago. Maybe
in the most frontier places, the most brilliant congresspeople
are saying, what does AI look like today? We have
no idea how to build a policy that regulates what

(53:11):
AI will look like in five years. So we take
control of building AI ourselves.
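
Taking the two figures quoted above at face value, a quick back-of-the-envelope calculation shows the scale of the jump being described; the thirty-day month is an assumption, and wall-clock training time is only a rough proxy for compute, so treat the result as an order-of-magnitude sketch rather than a benchmark.

```python
# Rough speedup implied by "about eighteen months" of training versus "four minutes".
MINUTES_PER_MONTH = 30 * 24 * 60               # assume a 30-day month: 43,200 minutes
old_training_minutes = 18 * MINUTES_PER_MONTH  # ~777,600 minutes
new_training_minutes = 4

speedup = old_training_minutes / new_training_minutes
print(f"Implied speedup: roughly {speedup:,.0f}x")  # roughly 194,400x faster
```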

Speaker 1 (53:16):
In fact, I wanted to ask you both. Geoffrey Hinton,
known as the godfather of AI, spent decades advancing AI,
but has recently cautioned about the potential existential dangers it
could pose. I feel like you all have kind of
diminished the bad stuff that goes with AI, and I'm
curious if you can give us some sense of how

(53:40):
it could be misused or abused in the wrong hands.

Speaker 4 (53:44):
To be clear, there's lots of bad stuff, it's just
not that particular bad stuff. So there's bad stuff happening
right now all the time. And so Jeff Hinton and
others have portrayed a possible bad thing in the future
that has some unknowable but I think very small probability.
So there are other existential risks right now that don't
involve AI at all. Let's worry about those. But also

(54:06):
when we're talking about AI, there's all sorts of bad
things happening right now with AI. You know, if you automate,
as Vilas was saying earlier, sexism, it doesn't make it
less sexist. Right? For you to have a biased algorithm
and then automate it so it can be, you know,
sexist or show biases at high efficiency at scale, that
doesn't make it any less biased, right? It's still bad.
So I wouldn't say that we're, at least on my part,

(54:28):
trying to minimize the bad stuff. It's just that it's not
Jeff Hinton's bad stuff that I'm more concerned about.

Speaker 3 (54:32):
Let me tell you what I spend my time on.
I spend my time on making sure that communities around
the world aren't just totally left out of the AI revolution.
I spend my time thinking about making sure that AI
decisions that affect people's lives have the contours of human
ethics around them. I spend my time making sure that
the people who are building these tools are representative of
all of us. I spend my time making sure that
AI is not being used to run autonomous weapons and

(54:54):
run warfare. These are things that we can all spend
our time on to make sure that AI doesn't actually
make the world worse and maybe makes the world better.
I don't have time to be thinking about what happens
in twenty five years when one man's conception of a
risk comes true. There's a lot of risks that actually
affect our daily lives today that we should be spending
our time making better.

Speaker 2 (55:14):
One of the areas that we're very, very focused on
at the Aspen Institute is the intersection of artificial intelligence,
the upcoming twenty twenty four elections, and societal trust. And
it's a big area of concern. We've seen, even
just from recent elections outside the United States, recently in
Argentina and in the Netherlands, Slovakia, and Poland, we've seen how

(55:37):
some of the parties, the candidates, the campaigns are using AI,
and there is some significant concern about our twenty twenty
four elections and the ways that AI might impact what
the population thinks, how they vote, where they show up. Can
you just share with us, both of you a little
bit about what you're seeing there and what you think

(55:57):
we should be most worried about here?

Speaker 3 (55:59):
If anything? Yeah, look, a majority of the
world's population is going to the polls next year, as you
just pointed out. What I'm concerned about is not
how AI will go and change the elections. It's how
bad actors are going to use AI to do what
they've already done to perforate trust in our society, but
do it even more effectively. I'm worried about things like
somebody deciding to send out three hundred and fifty million

(56:20):
individualized emails to manipulate the way people are going to vote.
And here's the worst part. They don't have to include
any misinformation or lies at all, because what they can
do is look at real factual information and only give you
a version of that story that affects your demographic as
they understand it, that analyzes your behaviors and tries to
get you to vote a certain way. We don't even have

(56:40):
rules in place on what to do if somebody comes
to you with something where not a single fact is incorrect,
but is architected to manipulate you in some way. This
is where we should be spending our time thinking about
policy and regulation.

Speaker 1 (56:51):
Chris, any thoughts from you?

Speaker 4 (56:53):
It's been said so well so far, it's hard to summarize it. Yeah,
that's a real concern, and some of it is about
AI as we understand it this year, but some of
it is about the fact that our marketplace of ideas
has become completely algorithmically empowered by a few private companies,
and so all of our conceptions about how, you know,

(57:15):
having lots of people have a free exchange of ideas,
you know, were predicated on a very different sort
of game theory of the way people are trading ideas.
In addition to the fact that the digital assets are
so easily manipulated that there's room for creating things that
look trustworthy.

Speaker 1 (57:31):
Like Nancy Pelosi intoxicated.

Speaker 4 (57:34):
That's a good example, or.

Speaker 1 (57:35):
Tom Hanks talking and saying something about a dentist
or some dental service, or.

Speaker 4 (57:42):
Simply taking footage from a video game and
representing it as being from a war zone, which also
has happened time and time again in different military conflicts
and continues to happen. So it doesn't even have to
be deepfakes, right? It can be absolutely cheapfakes that,
you know, are accelerated by an information platform which is used
to optimize engagement. Now I'm going to go down a

(58:03):
slightly nerdy rant, I can tell. Anyways, it's bad. So
there's a lot of concern there, and it's a very
difficult time for academic researchers to investigate it because the
digital commons is now owned by a few private companies
who are not particularly motivated to share information in a
research friendly way. So it's difficult for us to do
anything that even looks like experiments, which is the way
science has been done for the last century, to do

(58:24):
randomized controlled trials around different treatments. There's sort of no
framework for doing that, technologically or ethically. The people who
are most concerned about it are not particularly technologically able
to get hold of lots of data and do statistical
analyses of them, so it's a concern. I mean, it's
a concern politically, it's a concern for researchers who want
to understand it. I'm concerned.

Speaker 2 (58:45):
I'll add one other thing, just as an addendum to this.
There's much we can't accomplish between now and the elections
next year. But one thing we can do, and this
is work that we're taking on, a little promotion for
the Aspen Institute here, is bringing groups together
who are not talking to each other. We did a
deep dive. We spoke to a lot of experts, including both
of the experts here. One of them said something that
hit us, which is that election officials don't understand what

(59:07):
is the potential of what AI can do to cause confusion,
and the AI companies don't understand how democracy works. So
we can bring these groups together to cross-educate, cross-train,
to understand each other's risks. That's at least something.

Speaker 1 (59:20):
And Chris, one of our recommendations from our commission, of
which Vivian was a part and I was a co-chair,
was to open the doors for scientists and researchers to
actually study these tech companies. But clearly, Vivian, that hasn't happened,
has it?

Speaker 2 (59:36):
Well, we may need a whole other podcast for that,
given the political pressures that are happening on those that
are looking into mis- and disinformation and the chilling effect
that that has, but it's troubling.

Speaker 1 (59:47):
Well, on that note, happy holidays, everybody. Chris and
Vilas and Vivian, thank you all so much for this conversation.
I hope it's helpful to people who are trying to
wrap their arms around this new technology and the ramifications
it is going to have on all of us. To
all three of you, thank you so much.

Speaker 4 (01:00:08):
Thanks for having us, Thank you, Thanks everybody.

Speaker 1 (01:00:18):
Vivian, you've become such an expert in this area. Did
you hear anything new or interesting or are you as
troubled as ever?

Speaker 2 (01:00:26):
That's a good question. It's not that I heard anything new,
because I spend a lot of time in this space.
But what to me was so revealing about this conversation
is that it's not all the things that we're worried
about, the robot overlords taking over or the deepfake
that's going to, you know, make everybody in the
world believe it. It's the second and third order effects

(01:00:47):
and the fact that so much control over these incredibly
powerful technologies is in the hands of just a few people.
I think they both made those points very very strongly,
and I think it's really helpful to focus on sort
of the things that really matter. We get very distracted
by shiny objects and maybe don't focus on the fundamentals.

Speaker 1 (01:01:09):
Like the telenovela story of Sam Altman, when we need
to really focus on the long-term implications of all
of this. Well, I think they're both really nice, really smart.
Thank you for introducing me to them, Vivian, and thank
you for being part of the podcast.

Speaker 2 (01:01:26):
Well, thank you for letting me share Dinny's with you, Katie.
It's an incredibly humbling honor, so thank you so much.

Speaker 1 (01:01:39):
Thanks for listening, everyone. If you have a question for me,
a subject you want us to cover, or you want
to share your thoughts about how you navigate this crazy world,
reach out. You can leave a short message at six
oh nine five point two five to five oh five,
or you can send me a DM on Instagram. I
would love to hear from you. Next Question is a

(01:02:01):
production of iHeartMedia and Katie Couric Media. The executive producers
are me, Katie Couric, and Courtney Litz. Our supervising producer
is Ryan Martx, and our producers are Adriana Fazzio and
Meredith Barnes. Julian Weller composed our theme music. For more
information about today's episode, or to sign up for my newsletter,

(01:02:23):
Wake Up Call, go to the description in the podcast app,
or visit us at Katiecouric dot com. You can also
find me on Instagram and all my social media channels.
For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts,
or wherever you listen to your favorite shows,