
March 24, 2025 • 11 mins

Greylock partner and LinkedIn co-founder Reid Hoffman discusses common fears around AI and how AI agents are becoming more advanced than chatbots. He speaks with Bloomberg's Francine Lacqua.



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Bloomberg Audio Studios. Podcasts. Radio. News. Now.

Speaker 2 (00:08):
Earlier this month, an AI model called Manus went viral
for its apparent ability to act more independently than AI chatbots.

Speaker 1 (00:15):
Now.

Speaker 2 (00:15):
The development of so-called artificial intelligence agents has
raised concerns that they will erode humans' ability to think.
But Reid Hoffman, the LinkedIn co-founder, Microsoft board member,
and Greylock partner, thinks the opposite. He's just published
a book in which he basically argues that as AI systems gain
greater abilities, they will enhance human agency, hence the book's

(00:35):
title, Superagency: What could possibly go

Speaker 1 (00:38):
Right with our AI

Speaker 2 (00:39):
Future? And Reid is here with me. Thank you so much,
Reid Hoffman, for joining us. I mean, this is a
breath of fresh air, because there is a lot of
concern and a lot of worry that AI takes over,
the computers will be in charge, and we will basically stop
using our critical thinking.

Speaker 1 (00:53):
Yes, but actually, in fact, if anyone plays with AI today,
it is the most amazing education technology.

Speaker 3 (01:01):
We have created in human history.

Speaker 1 (01:03):
If you want to learn anything. I use it to
learn everything from quantum mechanics to, huh, I wonder
what cooking sous vide this way looks like. It's everything.

Speaker 2 (01:12):
But I guess the concern is that, you know, especially
people that go into a first-time job, or students,
or, you know, college kids, don't use their critical
thinking anymore. Because if you just go into a chatbot
and say, write me a song with this, this,
and this, it does it for you.

Speaker 1 (01:27):
Well, it definitely can do a bunch of things for you,
but it can help you elevate your game.

Speaker 3 (01:32):
Right.

Speaker 1 (01:33):
So it's a little bit like if you were just
copying Wikipedia and handing it in as your essay. Sure, you could
do that, but actually, in fact, you should use it
to inspire you, to make you think better, to say, hey,
like, for example, when I was writing Superagency, I would
put in sections and say, how would a history of
technology specialist critique what I've said? And then I understand

(01:54):
it and I can decide whether or not I make
those edits, and therefore the book gets better.

Speaker 2 (01:59):
So, Reid, can AI reduce cognitive capabilities? This is
a big concern, that we stop thinking, that we think less,
that we think differently.

Speaker 1 (02:06):
Well, I think, just like with all technology,
you can approach it being lazy. So if you
just say, okay, I'm going to outsource it, just like,
for example, I'm going to say whatever the first search
result on Google is, that's the answer. And if you
do it that way, then of course that doesn't help
you extend. But if you do anything in terms of

(02:28):
having it be a dialogue with you, having it extend
your capabilities, asking a question, getting an answer, asking another question,
then it greatly amplifies your capabilities.

Speaker 2 (02:38):
How do you get rid of the biases? Because if
you have too many biases, then it skews, of course, democracy.

Speaker 1 (02:46):
Well, so all of the major AI labs are trying
to get it as, kind of, call it unbiased as possible. Now,
within human perspective and human knowledge, there's always some
bias.

Speaker 3 (02:58):
We're always learning.

Speaker 1 (02:59):
Like, if we look at human beings fifty
years ago, now that we are fifty years past, and we
look at them and say, oh, they were biased about this,
I'm certain humans fifty years from now will be looking
at us the same way. So it's an ongoing process
with us as well as the technology.

Speaker 2 (03:14):
Is there anything that worries you about AI?

Speaker 1 (03:16):
The primary thing that worries me... I call AI
the cognitive Industrial Revolution. It's both for the upside, which
is this whole society we live in, middle class, education,
medicine, all comes from the Industrial Revolution. That same amplification
is coming, but the transitions are difficult. So the thing
that primarily worries me is to say, look, we're going

(03:38):
to have to navigate this challenging transition, just like the
Industrial Revolution was a challenging transition.

Speaker 3 (03:44):
But that's how we have.

Speaker 1 (03:46):
Our children and our future generations be prosperous and have
amazing societies.

Speaker 3 (03:52):
And so that's the challenge we need to rise to.

Speaker 2 (03:54):
So what's the right way of either designing AI or
designing safeguards for AI?

Speaker 1 (04:01):
Yeah, well, part of it, there's kind of a two-part
audience for Superagency. One is the people who
are AI-fearful or concerned, to help them become AI-curious.
But it's also for technologists, which is: design for human agency,
design for increasing human agency.

Speaker 3 (04:19):
That should be your.

Speaker 1 (04:20):
Design principle, fundamentally. And the book is also, I
hope, helpful for them.

Speaker 2 (04:25):
So, Reid, this is basically putting the human, you know,
at the center still of everything. So how do they fit
into the next decade?

Speaker 1 (04:33):
Yeah, well, AI, I think, can be amplification intelligence, not
just artificial intelligence, and that amplification is the superpowers that we get.
And part of superagency is, if
you get a superpower, that helps me too. That's how
we have superagency together, as long as.

Speaker 2 (04:48):
It's democratic and everybody has it? Or is that a
question for a second phase?

Speaker 1 (04:53):
Well, I think one of the good things about it,
And that's part of the reason why the first chapter
is about humanity enters.

Speaker 3 (04:59):
The chat, with ChatGPT.

Speaker 1 (05:01):
When we build technologies for hundreds of millions and billions
of people, that's broadly inclusive, so that your Uber
driver has the same iPhone that Tim Cook has. That's
the kind of inclusion that we're targeting.

Speaker 2 (05:15):
Reid, I mean, I guess evolution is not necessarily progress,
full stop. So how do you make sure that this
means progress for the majority of humans?

Speaker 3 (05:25):
Well, so I think.

Speaker 1 (05:27):
Look, I think as we iterate and we participate, we
make progress. For example, even though you say, well,
we have a whole bunch of cars and that creates
climate change, the cars also create our industrial society. And,
by the way, the way that we tackle climate change
is we address carbon emissions, and we add, you know,
new kinds of clean energy, and we do EVs,

(05:48):
and so, you know, I tend to be very... you know,
as we do iterative deployment and as we bring humanity
into the loop, I tend to think we do make progress.
Now, I think, again, you make better progress by having
the right kind of design principles, by accepting criticism.

Speaker 3 (06:03):
By talking about it.

Speaker 1 (06:05):
So, you know, I describe myself as a bloomer, which
is not that technology is just great. It's technology engaging
with people that's great.

Speaker 2 (06:12):
But it also depends on the people in charge. What
do you think of Sam Altman's performance so far in
leading open AI?

Speaker 1 (06:19):
Well, so I think, look, I think Sam's great contribution
to humanity will be OpenAI. And that's while having
done a number of amazing things before and done amazing
investments, like in fusion, and all the rest. And I think
that his ability to think very big and to have
bet very hard on this, you know, a little bit
of a technological thesis of scaling compute and scaling learning systems,

(06:43):
is what matters. And that's why OpenAI has brought
this current revolution to us. And it's that these machines learn,
and they learn things that we help them learn and
help teach them.

Speaker 2 (06:55):
Is there someone... I know you've also had your differences
with Elon Musk. Who's the person in the space that you,
if not admire, listen to the most?

Speaker 1 (07:05):
Sam Altman is definitely one of them. Kevin Scott at
Microsoft is another, Dario Amodei at Anthropic is another, James Manyika
at Google is another. I mean, I think part of
the thing that's very important about making AI for humanity
is people who listen to others and talk to others
and accept criticism. And I think that's one of the
things that all of these people are very good at.

Speaker 2 (07:27):
Do you need to regulate it or is it something
that you need to see how it runs and then
think about regulating afterwards.

Speaker 1 (07:34):
So I think what you do is you start with
the absolute minimum regulation you could do, for the things
that could be really bad, not for, oh, look, it
might have a biased picture or might have a biased statement.
Like, we can iterate, we can fix those as we're going.

Speaker 2 (07:47):
Really bad is, what, people taking over planes to crash,
things

Speaker 1 (07:51):
Like that, you know, cybercrime, et cetera. Regulate for that,
right, and then do iterative deployment. And by the way,
with iterative deployment, you eventually get to the right regulations. So,
for example, if you tried to make everything perfect with
cars before you put them on the road, we'd never
have cars. So you put them on the road and
you go, oh, this is for bumpers, this is for windshield wipers, this

(08:13):
is for... And occasionally, like, the market doesn't want seat belts,
the car manufacturers don't want seat belts, and then of
course the regulators come in and say, no, no, seat
belts are good, we're going to add those.

Speaker 2 (08:23):
I mean, if you regulate for things... I mean, terrorism
is bad actors, bad state actors. So, I mean,
how do you regulate it? Unless you have to protect yourself.
So it's basically finding the technology

Speaker 1 (08:34):
That blocks them. Well, I think the regulation is,
if you're releasing the technology to the general public, which
could be to the bad actors as well, you're doing
red teaming and safety, you're putting the right security measures in place
to make sure that you're not leaking the technology to
rogue states, terrorists, et cetera, that you have a safety
plan, that you go, okay, the technology
I'm building, if it does leak or anything else, why

(08:56):
will it

Speaker 3 (08:57):
Still be safe?

Speaker 1 (08:57):
And how do we continue to have the technology that
makes anything that's in the wild as safe as possible.

Speaker 2 (09:05):
On Nilon must Do you think he has too much
power being so close to the president.

Speaker 1 (09:09):
Well, so, look, I think he's a celebrated entrepreneur.

Speaker 3 (09:13):
But I think that.

Speaker 1 (09:15):
Governments are not companies. Like, for example, risking a company,
like, for example, to say, oh, our ten rockets blow up,
who cares, it doesn't matter. If the financial system of a
country blows up, that's a cultural revolution, that's terrible. So
you actually have to say, we take less risk here,
even at the price of some inefficiency, because it's more

(09:35):
important for us to not have things blow up.

Speaker 2 (09:39):
Do you worry that things are going too quickly with
the Trump administration?

Speaker 1 (09:43):
Well, I worry that very bad risks are being taken. Speed

Speaker 3 (09:47):
is not a problem. Risks are a problem.

Speaker 1 (09:50):
And, you know, for example, it's like, well, we're just
going to fire a whole bunch of people. Oh, oops,
we fired a whole bunch of nuclear safety inspectors. Like,
that's the kind of thing, taking risks that are unwarranted.

Speaker 3 (10:00):
Reid.

Speaker 2 (10:00):
I also want to talk to you about China, because
DeepSeek kind of got everyone at the edge of
their seat. Do you have a good understanding
of where China is on AI?

Speaker 3 (10:10):
I have a reasonable understanding.

Speaker 1 (10:12):
I do a fair amount of talking to various people
in China in order to make sure. Part of the
thing is that when, last year, I was going around saying
there was actually an economic race in AI between the
West and China, people were like, oh, no, you're overblowing
that because you just simply don't want

Speaker 3 (10:27):
To be regulated.

Speaker 1 (10:28):
And I think with DeepSeek and everything else, we
see that that race is there, and the Chinese government
has said that they want to be, you know, AI leaders,
like, leading the world

Speaker 3 (10:36):
By twenty thirty. I think the race is on.

Speaker 1 (10:39):
I think it's very important for the US and our industries
to actually, in fact, be winning.

Speaker 2 (10:44):
Is this the new arms race?

Speaker 1 (10:46):
Well, I don't call it an arms race, because it's
primarily an economic race. There are arms components to it,
but yes, it is an economic race.

Speaker 2 (10:54):
Reid, thank you so much for joining us. That was
Reid Hoffman, LinkedIn co-founder, Greylock partner, and of
course author of Superagency. It's a good book. It's
well written, and it's to the point.