Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:20):
This podcast is for
informational purposes only.
Personal views and opinions expressed by our podcast guests are their own and are not legal, health, tax, or professional advice, nor official statements by their organizations.
Guest views may not be those of the host.
(00:43):
Hello and welcome to AI or Not, the podcast where business leaders from around the globe share wisdom and insights that are needed now to address issues and guide success in your artificial intelligence and digital transformation journey.
(01:03):
We have a unique and special guest with us today, Andrea Bonime-Blanc.
Andrea is founder and CEO of GEC Risk Advisory and she is a leader in governance, risk, ethics and the impacts of emerging and exponential
(01:24):
tech.
We have quite a bit in common and I certainly value you and your amazing credentials, so what I'm going to do is ask you to tell us a little bit more about yourself, Andrea, and first of all, welcome to AI or Not.
Speaker 2 (01:42):
Thank you so much, Pam.
It's a real honor to be here.
Thank you for inviting me to be part of your wonderful podcast and I look forward to our conversation.
Speaker 1 (01:51):
Yeah, Andrea, I'm
excited to have you.
So will you start out by telling me more about yourself, your career journey, which is fascinating?
What's driving you to do what you do today, and what have you got planned for tomorrow?
Tell me more.
Speaker 2 (02:08):
Boy, oh boy.
Well, that's quite a great question.
So I'll start with the past, then today, and then I'll end with the future.
So my professional background was that I got a PhD in political science and a law degree at the same time.
I opted for the practical route of becoming a practicing lawyer, but I always had a passion for the systems thinking that goes
(02:31):
with being a social scientist.
So that kind of informs a little bit about what I'm doing today and tomorrow.
But I started my career as a lawyer on Wall Street for several years.
Then I went in-house as a general counsel and as a senior executive in charge of a variety of different functions, in addition to being GC.
For about 10 years I also supervised enterprise risk
(02:53):
management, corporate social responsibility, ethics and compliance and a few other things.
And I was in four different companies almost 20 years as a C-suite executive.
And in the last company I was given responsibility for cybersecurity, which was at the time a new and emerging field to be in charge of, and I panicked at first.
(03:14):
But then I realized I had to learn a few things that I didn't know, especially technological things.
But I did realize in that exercise, which was a very, very valuable opportunity for me to learn more, that cybersecurity was, in its essence, governance.
It was about risk management, about good governance and about
(03:42):
having the right people and the right resources to be able to do it.
So it was a very, very valuable educational experience for me, and at the end of that exercise I was really intrigued by the whole tech scene and the whole tech risk, tech ethics and the sustainability of a variety of different parts of a corporate
(04:07):
or organizational activity.
But cybersecurity was definitely part of what I did, and one of my first gigs, so to speak, was for The Conference Board.
They asked me to do a research study on emerging best practices in cyber risk governance, and that was a great platform to talk to a bunch of great companies and learn more about it and then also be able to share some of that.
(04:29):
And I ended up starting a course at NYU at the School of Professional Studies Center for Global Affairs, where they had a master's program in cybersecurity and national security.
So I contributed a new course for them which was about cyber leadership, risk and governance, or resilience, and so I've always had that going on.
(04:50):
I've served now on a few boards, Cyber Future Foundation as well as WireX, which is a for-profit, and what I've done is I've maintained that interest in the new technologies that have been emerging, and back in 2017, I co-authored a book on AI, which is called The AI Imperative, which was looking at
(05:12):
some of the early strategic and governance issues that we needed to think about as board members and corporate executives and managers in terms of AI.
So, throughout this process of the last 12 years that I've had my own business and I've been giving strategic advice and governance advice on these issues, I've also been a member of the NACD and of the Athena Alliance, and I've done a lot of
(05:36):
extra article writing, speaking, really trying to push the envelope of understanding these new developments.
And so, coming to the today and tomorrow part of your question, I'm intrigued and both troubled and excited about what these new technologies can do for us.
(05:57):
The opportunities are explosive, as we all know, especially with generative AI coming online and how it interconnects with all these other technologies like biotechnology and computing, supercomputing and edge computing. And quantum one day will probably eat all of our lunches, but in the
(06:18):
meantime, all of these things are extraordinarily exciting because they will push the envelope of what we can do to improve health and education and so many other things.
But unfortunately, just like with every other technology, it's got dual use, and so the downsides can be really serious as well. And so for those that are developing some of these very serious and important large language models and all the new
(06:43):
products and services that are coming online, we need to think about our stakeholders responsibly, whether they're shareholders or customers or users or children, the vulnerable people who are not necessarily getting the full benefit of this.
We need to think about all of those implications.
So, for me, I have a passion for thinking about how the
(07:06):
stakeholders can be in the tech loop, and that's one of the things I talk about in my new book, which I've just finished.
And what I hope to keep doing is helping, whether it's people,
(07:27):
organizations, even governments and agencies, to think creatively, effectively and responsibly about the implications of these new technologies and how we can create guardrails without stifling innovation, without putting too many limits on the great innovators that are out
(07:49):
there. But, at the same time, some of those great innovators don't really think about the responsibility piece, so we need to have a coming together.
So that's my big focus going forward: bringing what I call in my book the tech masters of the universe together with the tech guardians of the universe, so that we're talking to each other and we're not just
(08:11):
competing or, you know, talking away from each other.
So I'll stop there.
Speaker 1 (08:17):
That's great, yeah,
so I'm going to take what you just said, since that's such a fascinating background and where you're going is intriguing.
So how about I take that conversation that you just had and let's pull a few threads.
In what you just described a moment ago, you talked about the
(08:37):
need for responsible tech.
You mentioned the tech masters and the guardians, because they go hand in hand.
You mentioned exponential capabilities and how the technologies are emerging and that it is exponential, but we know that we have to have guardrails.
(08:57):
So let's talk some more about that.
Tell me what you see as responsible tech, and what type of actions do we need to take to make sure that the tech evolution is more responsible?
Speaker 2 (09:16):
Yeah, you know, I
think one of the things, and maybe this is because I'm sort of trained to think this way, and maybe I'm just, you know, originally just this way, a bit of a risk manager, governance person, ethics person, is that I've always talked about this outside of, you know, the tech world.
I spent many years as a chief ethics and compliance officer in
(09:38):
several companies, and tech really wasn't a big part of that.
But now tech suffuses everything and, you know, people who say that they're not a tech company are mistaken.
Everybody's a tech company now.
So I think what we need to think about is that some of that old-fashioned thinking needs to be updated and refreshed, for
(10:01):
sure, in terms of, for example, the ethics and compliance frameworks that we've had, which are part of sort of the larger companies' way of doing business, certainly the better companies' way of doing business.
We need to refresh, update and integrate that thinking with the way we create tech products and services, or the way we acquire,
(10:23):
whether it's data or algorithms or software or hardware, and we then integrate it into our own supply chain, our own product creation, development, design, et cetera.
And so we need to have what might be called the old-fashioned great leaders, or good-to-great leaders,
(10:43):
in place in the first place as CEOs.
And we need to have board members that actually are savvy about these issues, not just the ethics and compliance and the governance and the risk, but who are savvy about tech, who understand it better than your average person like me, for example, who's learned it on the job but is not a technologist,
(11:03):
not an engineer, not a scientist or mathematician.
So I don't have those lenses, but I do have these other lenses, and so I think it's extremely important to have the right leader in place who is open to learning and understanding from people that have the chops and the skills that the CEO doesn't
(11:23):
have, the top leader doesn't have, and same thing for the board.
You need to have a board that is diverse and representative of the world we live in and are going to live in, not the world that we used to live in, and so we have to have those technologists, those scientists, those cyber experts who are capable of also being board members, being on boards.
(11:45):
We need more than just one person like that.
We need several people on every board who can think laterally, who can think in an integrated, 360 kind of a way, and who can ask those really important questions about whether certain activities are taking place around the acquisition and
(12:05):
integration of data or the design of a product or service.
And so smaller things, not smaller really, because it can become big, but small things like: do you have an ethicist involved in the design of your new products?
Do you have someone who's talking with and collaborating with the engineers and the software people in creating
(12:28):
those new products, in bringing the data, knowing where the data is coming from, all this stuff that relates to big data, algorithms, generative AI, et cetera?
Do we have those savvy ethics people who are working at the inception and
(12:51):
during the development? And so we need board members and CEOs to think about these things in the same way.
Do we have those interdisciplinary teams that have eyes on?
That's how you integrate responsibility.
It's not by saying these are our six principles of responsibility for AI and, you know, it's basically tech washing.
You put a beautiful thing on your website and you say you do it.
Well, let's get a little deeper into that.
Are you actually doing it?
(13:11):
Do you have all the mechanisms to prove that you're doing that?
Do you have auditors, monitors, people who evaluate these things, and can they then represent and report up to the board and to the C-suite that these things are happening?
So it has to come from the leader in the first place.
It has to come from the board in the first place, and this is
(13:33):
why it's so important to have a really savvy board who can hold the CEO accountable on this stuff.
It's a very similar story to the past: do you have an ethics program? Do you have a sustainability program? Well, now: do you have a tech responsibility program?
Are you actually doing the things that you're supposed to do to integrate the responsibility, or are you just
(13:53):
creating another form of greenwashing, which we can call tech washing, right?
Speaker 1 (13:59):
So, as you were
talking, I was thinking about an experience I had this week actually, and I am on several boards.
I'm either on an advisory council or a board of a not-for-profit.
So on one of them I feel oftentimes like the technical
(14:25):
person who is the sidekick: oh yeah, so we'll talk about this, this and this and this, and, oh yeah, the technical piece, right.
And so this time, at this last meeting, I just basically let him know: you want to be more effective, you need to think about technology, you need to think about the technology
(14:47):
director, we need to think about what our technology strategy is.
It doesn't seem like we have one.
Maybe we can lean on those that have technical savviness to help pull together a tech strategy, but where is it? What is it, right?
And so the eyes came open and we're rocking and rolling, and
(15:11):
so I think that you mentioned something.
You said we need a savvy board, and back in the day, maybe you would feel that, if you are technical, you've got what you need, right, you just need the soft skills, you just need it.
That is so old school, absolutely.
(15:33):
And as you were talking, I thought about how I've literally had to go through that with the boards that I'm on and point that out, because we think that when we get to that level that, hey, we don't need this.
Well, maybe you don't need to be super, super technical, but I don't know how you can provide oversight if you don't have a sense of what cybersecurity and technology is all about, right,
(15:57):
and if you're not in tune with how technology is emerging, how are we effective board directors, right?
So you've pointed that out, so I like that.
I think that leaders should be tech savvy, evidently, and that we need to refresh and update our approach, and I love how you
(16:20):
pointed out that we want to integrate, talk about and update the way we integrate and the way we acquire technology.
That's pretty cool, so thank you.
So here's my next question: how do you see the impact of AI from the perspectives of culture and generations? And can you tell
(16:43):
me more about your perspectives on the debate that's going on in Silicon Valley?
Speaker 2 (16:48):
Yeah, wow, I just had
a very interesting off-the-record chat with a group of women, the Athena Alliance, yesterday.
We talked about the accelerationist versus decelerationist sort of tech culture issue that's going on, and I've written about it also in my book.
And, on the one hand, you have Marc Andreessen and
(17:11):
some of the technology billionaires who are talking about how we need to grow and we need to create and innovate without any guardrails, and there's something he wrote called the Techno-Optimist Manifesto, which I encourage everybody to read, and there's things there that really kind of
(17:31):
made me jump a little bit, because he basically says the enemy is, and he mentions all of these different things that are what I do, like ESG and sustainability and risk management and guardrails and all this stuff, and so I oversimplify what he says in his Techno-Optimist Manifesto.
But there's a real truth to it, right?
(17:52):
And he's not the only proponent of this.
Clearly, we have Elon Musk and Sam Altman and many others who are really out there to sort of innovate no matter what, and, yes, some guardrails are being put around them, whether it's from regulators making noises or other stakeholders making noises.
But then you have people I really love, like the Center for Humane Technology, which is run
(18:17):
by Tristan Harris and Aza Raskin. These guys started, I don't know exactly how long ago, but several years ago they started this, and they're famous for having put The Social Dilemma movie together, which basically talked about all of the lack of guardrails in social media; now we all bemoan the terrible things that have happened through social media.
(18:38):
And in early 2023, they put together a YouTube conversation called The AI Dilemma, and it actually opened my eyes wide. I hadn't started writing my book yet, but it gave me a lot of fodder on this other side of, you know, this decelerationist conversation.
(18:58):
Decelerationist is a little bit of a pejorative, and the techno-optimists also like to talk about people who are looking for the governance and the guardrails as doomers.
You can see it also within the whole OpenAI saga that we've seen over the last year or so, where Sam Altman gets fired by the nonprofit board, which is concerned about safety, and then
(19:21):
he comes back, and there've been all kinds of other developments at OpenAI over the last year. But the point being that there's this struggle between those who are thinking about governance, guardrails and stakeholders and those who just want to innovate and use as much compute power and capability as they possibly can, and I really think there's a middle ground here.
I talk about this a lot in my book in terms of: we don't have
(19:44):
to be extreme over here or extreme over here.
We need to be talking to each other and having that interdisciplinary dialogue and respect for each other, and I feel like there's a lack of respect pretty much in one direction.
I've written a couple of pieces about this.
I talked about the diversity imperative in exponential
(20:04):
technology.
I wrote a piece for Diplomatic Courier on this, in which this cultural divide also exhibits itself, unfortunately, in very powerful, very wealthy white men over here and everybody else over here, meaning women, mostly people of color, the less
(20:26):
powerful and less wealthy.
So we don't want to have that.
That's not good for society, it's not good for democracy, it's not good for a lot of things, and so through my book, I'm trying to also sensitize people, especially those in positions of power, leaders of companies and others, board members, to
(20:47):
think about the broader stakeholder community when we're talking about tech innovation.
And, by the way, I'm the first person in my family who has always bought the latest tech.
I didn't go stand on a physical line to buy any of the iPhones, but I was the first one to order it and check it out.
So I love innovation, I love new technology, I love to know
(21:08):
what we can do next.
But this culture war that we have, which is also a reflection of some of our political polarization, unfortunately, it's not helpful to anyone and it hurts many more people than it helps, and so we need to come up with constructive ways to bridge the gap.
(21:29):
I think that's really important. And so I mentioned the tech masters of the universe and the tech guardians of the universe, because that came up as I was writing the book.
I think it's true.
And then there's a lot of people in between who are worker bees, right, who are the engineers and the ethicists and everybody else who's working hard every day to try to get this done right.
(21:50):
But there's a lot of corners being cut, and we haven't even talked about the, I don't really have a cute name for them, but the people on the outside of mainstream society, the criminals, the gangsters, the negative nation-state actors that are out there looking to use these tech tools as weapons,
(22:10):
and so that's a whole other kettle of fish we need to be aware of, right?
Speaker 1 (22:16):
So I agree that there
are varying cultures.
There's varying perspectives that must be taken into account.
Your cybersecurity expertise came through earlier, when you said that really, cybersecurity is good, strong governance. That's how I
(22:37):
feel.
That's why I have always stressed that there is a requirement that we all understand how to secure our assets, from a physical security and a cybersecurity perspective.
I don't lean and rely totally on a CISO, because they can't do
(23:00):
it by themselves, and we should be ever learning and coming into the knowledge of how to better secure our assets, because our assets, whether it's with your employer or not, they're a reflection of us.
We need that, right, and so that's why I think cybersecurity is so important, and you said that earlier. And then, as you were
(23:23):
talking through your perspectives on culture, the generations, the AI debate in Silicon Valley, I kept thinking about that and governance at large, because what you described is governance at large and how governance is emerging, just as the things that we have to govern are emerging, so that is brilliant.
(23:44):
That's a really good analogy there, because we do have to think about that and our governance models need to evolve and keep up, and so I think that's just really good.
So can you tell me more?
You did mention your book, so is there more that you can tell me?
And, as you're kind of talking about it, can you tell me what
(24:05):
motivated you to write your book?
And, if you haven't covered it already, what can I expect when it is published?
Speaker 2 (24:13):
Well, so I am very
relieved that I was able to deliver my manuscript to my publisher at the beginning of September.
It's going to be about a year before it gets published, because it's a university press and there's peer review and lots of production over the last six months or so.
(24:41):
One of the big challenges was, it needs to stay relevant in the face of these multiple fire hydrants, not just fire hoses, fire hydrants, of tech change, innovation, events taking place, et cetera. And so, at the end of the day, what I was trying to do with this book is use my lenses, because I don't have anyone else's lenses. So I have my lenses of governance, ethics, risk, impact, so I have
(25:01):
those lenses, and I think there's a bunch of other lenses that need to be brought to this conversation to illuminate things, especially for people like me who are not technologists and scientists. But I think all of us really benefit from talking to each other, bringing those lenses in to help develop
(25:22):
the governance of tomorrow, the ethics of tech, how this all impacts from a sustainability standpoint.
There's all that big discussion taking place on the energy needs and water needs of all of these data centers and expanded compute capacities, et cetera, and so my background is more in
(25:46):
understanding those things.
But those of us who are like me, we have to update ourselves, we have to really go with what's coming at us here and not just sit back.
And I'm of an age where a lot of people my age are serving on boards and stuff, and I'm serving on a couple myself, but the point is we can't sit back anymore, we have to lean in
(26:09):
on this stuff.
And so in my book I try to provide that kind of framework: A, to understand the context of our time.
So I start the book with some megatrends. I've written about megatrends before, but here with a lens on the technology impact on things like socio-ecology, geopolitics, leadership, trust
(26:31):
and the state of capitalism and the economy.
So, big-picture megatrends.
But then I go into a second part of the book that looks at five major groups of technologies, just to sort of scratch the surface for those who are not experts.
And so I lent my body to science to try to understand
(26:53):
some of these things and then explain them in English back to people that are not technologists.
So I've done that in part two, and then part three is where I really sort of apply my lenses directly, and I have what I call the exponential governance mindset.
It's five elements that I think we can all use and adopt and
(27:15):
think about as we do our work in our companies or in other kinds of organizations.
The last part of the book, the last two chapters, is really then saying: how do we future-proof ourselves as people and professionals and our organizations?
That's one chapter, and the other one is about future-proofing our global commons, everything that we
(27:38):
share in this world.
How do we future-proof the world that we all share?
That is a world that's very fragile right now for many reasons, including climate, of course, but also technology.
It can overnight create some exponential risk that we fear.
Hopefully it won't happen, but it could.
So that's the arc of the book, and it's really a journey to try
(28:02):
to get people like you and me to think about this stuff a little bit more systematically and then start bringing their talents and skills to the dialogue. And that goes back to the generations again, so respecting the various generations and looking at things from a generational perspective.
Speaker 1 (28:24):
And then you
mentioned something about the exponential governance mindset.
I think you said that's one of the chapters.
Speaker 2 (28:30):
It's a framework that
contains five elements, which are five chapters.
Speaker 1 (28:36):
Can you talk about
that a little more? Of course, I mean, I've got to wait to get the book, but tell me what you can tell me.
It's going to be a while before we get there.
Speaker 2 (28:44):
So everything for me
starts with good leadership and good governance.
So, having the right people in charge, because, at the end of the day, what they do and what they say sets the tone for the rest of the organization, and if there's a bad culture, it's a reflection on those leaders not paying attention or not doing what they say they're doing, et cetera.
So the first chapter, the first element of this mindset,
(29:08):
I call leadership.
But the more sort of explanatory piece, because I'm always looking at tech, is what I call turbocharging 360 tech governance.
So it's about getting everybody to think about their role in the governance of tech, starting with the early-stage designer,
(29:29):
who is bringing algorithms and other tech tools to creating and designing products and services in a company, all the way up to the board asking the right questions.
So there's like a 360 tech governance that is needed.
That's element number one, and that comes from the CEO and the board setting the right tones and parameters on all of these
(29:50):
things.
The second element I call ethos, and it's about embedding a responsible tech culture, and this goes back to some of the things we talked about a little bit before, which is the ethics and the responsibility that a company applies to how it does business.
Right?
Do you have the ethics and compliance framework?
(30:10):
Do you have a sustainability program?
Are you connecting the dots between them?
And then how does tech integrate with all that?
So that's the second piece: the ethos and the culture of the organization.
The third one I call impact, and the subtitle of that is integrating stakeholders in the tech loop.
So, you're a technologist, you know about humans in the loop or
(30:33):
on the loop or off the loop, which is the worst, I guess.
So I did a little play on words in this chapter, calling it integrating stakeholders into the tech loop.
So as we create our programs and our products and services, we have to think about who our most important stakeholders are.
How is it impacting them, even when they're not customers?
(30:54):
They might be users, children are users, they're not customers, and how are we accounting for integrating them in that tech loop?
So that's the third element.
The fourth one I call resilience, but it's actually a catch-all word for deploying a poly-risk and a poly-crisis mentality, and what I mean by that is we live in a world where
(31:17):
many risks are happening simultaneously.
Many crises are happening simultaneously.
There's this poly-crisis word that has kind of become popular recently.
Well, I've coined poly-risk as part of that kind of consideration, because we have many risks that are overlapping with each other and egging each other on.
So that chapter is all about risk, crisis and resilience.
(31:46):
And then the final element of this mindset is what I call foresight, and it's about unleashing a future-forward tech strategy.
So when you're putting your business strategy together, and you could be a nonprofit, you could be a government agency, you could be a corporation, are you pulling together all the elements that you need to think futuristically, to scenario-plan
(32:09):
properly, to understand what your options are?
If you're a company, a business, are you thinking about the startup that's going to come out of nowhere and eat your lunch from a technological standpoint?
You know, that kind of thing.
So, developing a conscientious future-forward business strategy.
(32:29):
So that's the fifth element of this exponential governance mindset.
I know there's a lot of words here, but there's a lot of detail and cases and examples that I've put in the book that illustrate these things.
It's not just an academic exercise, let's put it that way.
Speaker 1 (32:44):
So let me just say,
if I go back to my cyber hat, the first thing I thought about is a tabletop exercise that we're going through: some small startup has now emerged and they are a threat to your organization. How are we going to deal with this, right?
And so, again, those are some of those fundamentals that we go
(33:07):
back to, because that's IT security speak.
Yeah, I mean, it's just naturally a part of the way we do business.
There's something I want you to know about.
I don't know if you are aware of this, I brought this up before we started talking, but OpenAI has announced changes to its safety and security practices, and what they've done is they've
(33:29):
established a new independent board oversight committee, and apparently this board committee has more responsibilities than before.
So this safety and security committee has more responsibilities.
So now their responsibility extends beyond recommendations.
It has authority to oversee safety evaluations for major
(33:54):
model releases and exercise oversight over model launches.
So I thought they were doing that already, but if I think about our conversation today, this comes to mind.
So it says the committee will have the power to delay a release until safety concerns are adequately addressed.
(34:17):
So your conversation around taking into consideration the poly-risks and also the multi-stakeholder perspectives, right, so considering the children, and I heard you say something like "who are the stakeholders", and I wrote down as you were talking "who are the most important".
(34:39):
You said that part, and I added: who are the most important stakeholders and why?
So the tabletop exercises can help us think through that.
But I'm curious about this SSC, and how do you think something like your vision and your model fits with what they're talking
(35:02):
about here? And I know you didn't know about this.
So I just want to get kind of your perspective on it.
Do you think that kind of gels with what you're thinking?
Do you suppose they could use a little bit of insights from you?
Speaker 2 (35:14):
Well, they can take
it or leave it.
I've been ignored by many over the years.
I just hope to make a dent with those who are interested in these concepts.
Look, I think what you just described sounds very promising and hopeful and necessary.
It's something that hasn't been there, and we all know from the news over the last year or so that a lot of the safety and super
(35:38):
alignment people and others have left OpenAI because they're concerned about the lack of safety guardrails.
The very first one, and most important one, was the chief scientist, Ilya Sutskever, who originally fired Sam, and five days later Sam came back and he was sidelined for the next six months.
But he started his own firm, which just got a billion-dollar
(35:59):
first round, I believe, of financing, and he wants to build a super-safe AGI kind of company.
But he's going to put safety first, and so to me that's leadership, and hopefully the governance that he builds and everything else that he builds within his company actually works in that direction.
So maybe OpenAI is catching up with some of those concepts, but
(36:23):
they lost him and they lost Jan Leike, who also was a very important part of that; he headed up the superalignment team at OpenAI.
We have a series of anonymous letters and other types of complaints and concerns from both existing and ex-employees of OpenAI that they didn't have an environment where they
(36:45):
could feel safe to speak up about these kinds of things.
So to me it goes back to the essence of the ethos, the culture that I was talking about in one of my elements.
Is it one that is allowing anyone on staff, or even subcontractors, to speak up when they see a concern?
This is very basic ethics and compliance 101.
(37:06):
But if you don't have that in a highly innovative, bleeding-edge kind of technology company, you are flying without a net, and this is very dangerous to all of us as stakeholders, because we don't know what's coming out on the other end of this sausage factory, the black box of AI, right? And so for me it's about:
(37:26):
is OpenAI actually taking this seriously?
And maybe they are.
They do have a new board and they do have very sort of prominent people on their board, and they've brought in some high-level executives from other prominent corporations.
I hope it's more than doing what looks good.
I hope it's actually good, and I think it goes back to who's on
(37:49):
this new safety committee.
Are these truly independent folks, truly able to get the skinny on whatever they need to get the skinny on, or is it more of a something that looks good, that looks good to the regulators, looks good to the stakeholders, but isn't effective?
So to me, check the box.
(38:09):
Thank you, yes, check the box.
Is it a check-the-box or is it a real thing?
And so if it is a real thing, I think it's exciting and great.
But if it's another sort of tech-washing kind of activity, then I don't think we're moving ahead.
But maybe it inspires others.
This is a continuing dialogue, so hopefully this will continue
(38:30):
to move in the right direction.
Speaker 1 (38:32):
I would say let's
watch and see, and see if we can have some inputs along the way.
Exactly. You have been sharing words of wisdom and your experiences throughout this conversation, but I always wrap up with that question.
So can you please share words of wisdom or experiences that
(38:52):
you would like to leave with the listeners and with me?
Speaker 2 (38:56):
Well, thank you, Pam.
Thank you for this wonderful conversation.
I really enjoyed it, and you have such a wealth of experience and knowledge also to bring to this conversation.
Obviously, you're a leader in your field as well as a podcast leader, so you do a lot of great stuff for the community of listeners.
I mentioned that in one of the last chapters of the book.
(39:17):
I talk about how do we future-proof ourselves and our organizations, but it really starts with each of us, right? At the end of the day, we are as good as who we are, and if we're not improving ourselves and doing certain things to future-proof ourselves, we're not going to be very capable of dealing with these really cutting-edge and bleeding-edge
(39:39):
issues.
So I have a part of one of the chapters that talks about some of the personal qualities of the future-proofed professional, let's say, and this applies as much to a board member as it does to people coming up through the ranks, so to speak. There's 10 different qualities that I've singled out, but I'll only mention a handful that I think are really important, top ones. I
(40:02):
think we need to be curious and educatable.
I think we need to really be lifetime learners, and we can't just do a one-and-done anymore.
We have to have lifetime education going on, always continuous learning.
We have to be humble about what we don't know, and I will be the
(40:24):
first one to say to people, I don't know that.
I don't wanna get into trouble because somebody thinks I'm an expert in something that I'm not.
I've turned down business because of that.
So I think it's important to be humble. And I think the other thing that's so important, and I think of this accelerationist versus decelerationist debate that we talked about earlier, it's so
(40:44):
important to be empathetic.
We need empathetic leaders, especially, who listen.
They have to make important decisions, and sometimes tough ones, but listen to your stakeholders, to your community, to your customer, to your user.
Don't just do things because the regulator's waiting with a hammer.
Be empathetic to the stakeholders. And I think I'll
(41:08):
end with this: somebody called me this the other day and I suddenly realized it's true, I'm a systems thinker.
We need more systems thinking going on in our organizations, connecting the dots of disparate things.
We need people who are like that, and I think the humility piece and the continuous learning feed into this too, and we need to think
(41:31):
about the different parts of technology as systems that integrate and overlap with a whole bunch of other things that we're more familiar with, and we need to understand how to connect those dots.
Speaker 1 (41:42):
I think that's
really great input.
Thank you for the conversation, and one of the things that I took note of, because I was taking notes here, something that I really appreciate, is that this conversation addressed governance, but kind of from a different lens, more from an interactive,
(42:04):
collaborative approach, the future-thinking approach, right? So, the future of governance from the boardroom to the practitioner.
I'm so glad, because oftentimes we leave out the practitioners, and we don't think about the boardroom, we think about the managers, but it isn't just that.
So I'm so glad that we had a chance to talk about it from
(42:28):
that perspective and cover the gamut, and so I would say it's an inclusive governance discussion.
Yes, so thank you very much, and I appreciate you taking the time to talk to me today, and I think it's going to be very valuable to the listeners, and I love those takeaways.
Bye.