Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to Tech Stuff, a production from iHeartRadio.
Hey there, and welcome to Tech Stuff. I'm your host, Jonathan Strickland.
I'm an executive producer with iHeartRadio. And how
the tech are you? I've got a treat for you folks. Today
I had the opportunity to speak with Beena Ammanath, the
(00:26):
executive director of the Deloitte AI Institute. Beena is an accomplished
technologist and expert in artificial intelligence. She's a coder, she's,
you know, an engineer, and she's a great communicator too.
She has appeared on numerous shows and panels talking about AI.
(00:47):
She's also the author of a book called Trustworthy AI.
And this was a fantastic opportunity to speak with someone
who actually has a deep amount of experience in the
field and really talk about some of the big concepts
in AI and get a little more perspective on them.
And I have to admit Beena's responses really opened up
(01:13):
the blinders that I have on. And of course I'm
like a lot of people, right? I go through
life thinking I have a pretty good handle on this.
I think I know what's going on, and then I
meet someone else who has had a you know, a
different experience, and especially a different depth of experience in
a particular field and realize, oh gosh, I hadn't even
(01:34):
considered some of these specific scenarios, for example. So I
really very much enjoyed my conversation with Beena. She's also
incredibly good at putting things in a way that is
easily understandable. A lot of technologists, when you start to
talk with them, they get really heavy with jargon or
(01:59):
concepts that make sense if you've had experience working
in that area, but if you haven't, your eyes kind
of glaze over and you just trust that what they
say makes sense. That was not the issue with Beena.
She is really good at talking about this stuff on
a level that the average person can easily understand,
(02:21):
and yet also really stressing how AI is a very
important component today. I mean, we're seeing it rolled out
in all sorts of different ways across all different sectors.
We mostly talked about business in this conversation, but clearly
AI is everywhere. Whether we're talking about facial recognition technology
(02:43):
that might be built directly into the camera on your phone,
or maybe we're talking about a personal digital assistant, you know,
something like the Amazon one. I won't say her name
because some of you have her and she gets real,
like, she perks up when you say her name. Um,
those sort of things obviously have components of AI built
(03:05):
into them, but we were really looking at things like
processes in business where you might need to use automation
and artificial intelligence to make complicated processes more efficient and
less human intensive. And so, yeah, this was a great
(03:27):
conversation and I really feel like I learned a lot.
I hope that you all enjoy it. And again, she
does have a new book out. It's called Trustworthy AI,
and there's a copy on the way to me, so
I'm very eager to read it myself, because just talking
with Beena I felt like I was just scratching the surface.
But you're gonna hear all that, so let's get to
(03:49):
that interview. Beena, I want to welcome you to the show.
I am so pleased to have an expert on AI,
trustworthy AI no less. Welcome to Tech Stuff. Jonathan, thank
you so much for having me on your show. I
really enjoy your episodes, so I'm looking forward to having
this conversation with you. I am as well. And one
(04:11):
of the things that I like to do is kind
of set some foundation for any kind of conversation around AI,
because in my experience, and I'm sure you've experienced something
similar chatting with people about AI, it seems like everyone
has a different, sometimes very specific idea of what AI is.
And I'm curious, how do you describe AI to people? Yeah,
(04:36):
that's a great question to start with. So AI is
a form of intelligence that uses machines to do things
that traditionally required human intelligence. So it is artificial intelligence
which is created artificially by machines. Now, I like
that description because it covers such a wide spectrum, everything
(05:00):
from sort of the science fiction approach we've all
seen about machines that seem to think like humans to
a point where they usually become the threat. I mean,
that's typically the way we look at it, which is
I'm sure going to come into play when we talk
about trustworthiness, because I'm sure a lot of people aren't
aware how AI can sometimes be a danger, but not
(05:22):
necessarily like Skynet-from-Terminator type danger. Let me
elaborate on that description then, a little bit more on,
you know, that next level down on the AI definition. Right,
you know, the way I think about it. There are
three types of AI. One is artificial narrow intelligence, which
(05:43):
can do a very specific, narrow task that a human
can do, like sorting a bunch of photographs, right? That's
a very narrow specific task. So that's artificial narrow intelligence.
And then there is a form of artificial general intelligence,
which is a form of AI that can do any
(06:03):
task that human beings can do, right, So it is
pretty much everything that a human being can do. And
then I think of a third category, which is artificial
super intelligence, which is a form of intelligence which is
smarter than all human beings combined and can do more
things than human intelligence could do. So when we talk
(06:27):
about AI in the business world, or in reality, where
we are with AI, it's very much in
that artificial narrow intelligence space. But when we hear a
lot about AI in the media, or the hype and
the fear, you know, it's really talking about that super
intelligence phase, which is a form of AI that is
(06:50):
smarter than all human beings combined and has more capabilities
than human intelligence. And I think there's a big gap
between reality and where we, you know, are
anticipating things to be, right? And, you know, part
of the reason is, you know, artificial super intelligence is
a lot of our human imagination, which is where AI
(07:13):
was when I was studying years ago, right, So I
do think there is value in imagination. I do think
there is value in thinking of worst case scenarios so
that you can address it. But the reality is we
still today don't have the tools or the capabilities to
build out artificial general intelligence or super intelligence.
I see parallels in
(07:36):
this as well, like I have a very similar description
of autonomous cars, for example, like people talk about autonomous
cars like we've reached level five autonomy, when really I
would argue we're still around level two creeping into level three,
but we are not close to level four or five.
And this is why I like having this kind of
(07:57):
conversation right up front, so that people kind of set
their expectations, because artificial intelligence can already do incredible things
in these very narrow uses, and I'm blown away
by it whenever I learn about that. But I do
also see the allure and sometimes the trap of
(08:20):
extrapolating that beyond the narrow cases and thinking what happens
when this goes beyond that, which it could very well happen,
but we're not at that stage yet. Um,
but I think of things like, yes, I think
of things like the image recognition. That to
(08:40):
me is still an amazing thing to see developed. Like,
you know, ever since I started covering tech,
the ability has grown so fast. Like I remember when,
at least on the consumer side, you might see something
that was like detecting a face, not recognizing a face,
(09:01):
but detecting the structure that makes a face, for
a camera. And now, you know, that looks like
stone age technology compared to what we're seeing today. Yes, Jonathan,
and I know you've been covering tech for a
long time. You know, you've certainly seen the early evolution, right,
(09:23):
but you know, and look, when I studied computer science,
I did assembly language programming, basically using zeros and
ones, right, that level. And you know the languages that
I used, like Pascal and Fortran, you know, those
don't even exist today, right? So there's a whole evolution happening.
And I do think that is a big component to
(09:45):
imagine the future so that we can at least go
towards taking care of the risks, and focus on all
the good things that, you know, AI and technology can do, right?
So imagination is a good thing, but not the fear
part. And also, I think what you
said kind of is a great message to anyone who's
(10:06):
interested in really focusing on AI. The fact that you
were working in assembly, so close, such a low level language,
you get a real familiarity with what these
machines can do and their potential that I think, uh,
you almost lose when you start working on high
(10:27):
level, uh, programming languages. Like, you get so focused
on what the programming language lets you do. But if
you've worked out at that low level, you're like, hey, no,
I know circuits and wires. Okay, I am
one step away from this machine. Yeah, we
have to realize, you know, just like, you know,
when we talk about it today, we talk mostly about
(10:49):
the software, but the hardware is also evolving, right? We
certainly don't have racks of those massive mainframe computer
systems that we had to program, right? I think there
is evolution happening in every dimension, and, you know,
it's part of the growth of AI or any technology
if you think about it. Mm hmm. Well, I also
(11:11):
want to know what you mean when you
use the phrase trustworthy AI. So what is it that
makes AI trustworthy? And what's the alternative?
What is the untrustworthy side? Yeah, that's a great question,
and that's something that, you know, as a technologist, I'm
(11:32):
enamored by all the cool things that AI can do
because I just focus on all the value creation. But
over the past few years, as AI started becoming real,
I also realized that, you know, with all
the good things it can do, there are negative consequences
to it, right? And so I put those negative consequences
(11:54):
under the bucket of untrustworthiness. Ethics is
a big component of it, right? Whether the AI
is fair or biased, or transparent, explainable, but also things
like is it compliant with local regulations, does it have
controls in place? Does it have governance in place to
continuously monitor for it going wrong? Because, you know, Jonathan,
(12:17):
today AI is mostly machine learning, so it's learning
and evolving. It's not that era when we developed code,
put it out there, and the code stays static and
its behavior is very predictable. With AI, the outputs can
change depending on the inputs you feed it, and it's impossible to
train on all possible inputs. So trustworthy, for me, is
(12:37):
really when you have thought about and addressed all
the possible negative things that this AI solution can cause.
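To put Beena's point in code terms, here is a minimal sketch I'm adding for illustration; the toy "model" and the numbers are invented, not from the conversation. It shows why a learning system's answer, unlike static code, can change for the very same input as it keeps learning:

```python
# A toy online "model": it learns a decision threshold from the data it sees.
# Unlike static code, its answer for the SAME input shifts as it keeps learning.

class RunningMeanClassifier:
    def __init__(self) -> None:
        self.total = 0.0
        self.count = 0

    def learn(self, value: float) -> None:
        # Update the running mean with each new observation.
        self.total += value
        self.count += 1

    def predict(self, value: float) -> str:
        # Flag anything above the learned mean as "anomalous".
        mean = self.total / self.count
        return "anomalous" if value > mean else "normal"

model = RunningMeanClassifier()
for reading in [10.0, 11.0, 9.0]:    # initial training data
    model.learn(reading)
print(model.predict(14.0))            # -> "anomalous"

for reading in [20.0, 22.0, 21.0]:    # the input distribution drifts
    model.learn(reading)
print(model.predict(14.0))            # -> "normal": same input, new answer
```

Same input, different answer after more learning, which is exactly why she stresses governance and continuous monitoring rather than one-time testing.

Well, I would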
love to kind of dive into a little bit more
of that because one of the things that you said
that really resonated with me was the idea of transparency,
because I have covered this in past episodes of Tech Stuff,
(13:01):
but the sort of black box problem of creating
a system, for example, a machine learning system, and you
have this machine that's training itself over and over
and over. Maybe it's adversarial training, maybe you actually have
two systems that are set against each other and you're training,
and the issues that can arise if you have distanced
(13:25):
yourself so far from what the machine is doing that
you are unable to determine the process by which it
arrives at its conclusions. And that to me is one
of those pitfalls. Yes, but I would also challenge
it a little bit, Jonathan, because the whole thesis
of my book is that it depends on the use case.
(13:48):
It is not one size fits all. So depending
on where, and to solve what problem, you are using
that AI solution, it is for that organization, that team, to
decide if transparency is crucial, right? If your AI
solution is being used for patient care in a
(14:09):
hospital system, then transparency is absolutely crucial, right? But if
you are using the AI solution to predict when an
X-ray machine might fail, and you're able to predict,
at a high accuracy rate, that this machine is going to fail
in the next forty-eight hours and call a technician,
transparency may not be as crucial, right? So I think
(14:33):
transparency is crucial depending on the use case. And that's
true for all the other dimensions as well, even fairness
and bias, which we hear a lot about. So it
really depends on the use case that you're using
the AI for. Hey there, Jonathan, back at the home studio,
just here to say we are going to take a
quick break, but we'll be back with more with Beena Ammanath,
(14:55):
the executive director of the Deloitte AI Institute. Bias doesn't necessarily
mean negative, depending upon the use case of the technology.
In some cases you need to have a biased system
because it's specifically meant to be weighted to do one
(15:18):
thing versus another, and without the bias it doesn't do that.
But the way we typically hear about bias is when
it is making a negative impact, when it's something
like the facial recognition technologies. We've heard plenty
about that. So it is interesting to me. And, uh,
I'm curious, like, what are some of the
(15:40):
uses of AI you're seeing in technology now that you
find really exciting. Yeah, no, I think you know, we're
still very early on in this technology evolution and there
are still so many use cases to be solved, so
many industries to take AI to. Right, to your point
(16:00):
about bias and its relevance, I completely agree with you that,
you know, it depends on the use case, and it
goes back to that first question we talked about, right,
how AI is really emulating human intelligence, which means that
it is going to carry over the biases of the
humans that are building it, right? But as a
(16:20):
business or as an organization who's looking to use an
AI solution, who's looking to develop an AI solution, they
really have to, you know, bring together the stakeholders to
discuss and decide how crucial fairness or unbiasedness
is in this particular AI use case. An easy one
(16:41):
out is if it doesn't involve human data, then you
probably don't have to worry as biased as a factor
and address it. And if it does involve human data,
then again there is weightage in what right if there
is biased at in an algorithm that is providing personalized marketing,
that you know that there is a weight to it.
(17:02):
And if it is if there is biased in an
algorithm that is supporting law enforcement decisions, that's a higher rate, right.
And it's really about rating it, you know, weighing it
and deciding which ones are the one where biases acceptable
and you can still proceed and get value from the
AI solution, and which are the ones where it is
(17:22):
absolutely not acceptable and you need to stop and figure
out and alternate way to solve for that problem. It's
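Here's a minimal sketch of that triage, added for illustration; the categories, weights, and cutoff are hypothetical, invented for this example, since she is clear that the real weighing is done by the stakeholders at the table:

```python
# Toy bias-risk triage: weight an AI use case by whether it touches
# human data and by the stakes of the decisions it supports.

STAKES_WEIGHT = {
    "personalized_marketing": 1,   # lower stakes: a bad ad is annoying
    "hiring": 3,
    "law_enforcement": 5,          # higher stakes: errors harm people
}

def bias_risk(uses_human_data: bool, stakes: str) -> int:
    if not uses_human_data:
        return 0                   # e.g., predicting X-ray machine failures
    return STAKES_WEIGHT[stakes]

def triage(uses_human_data: bool, stakes: str) -> str:
    risk = bias_risk(uses_human_data, stakes)
    if risk == 0:
        return "proceed: bias is not a primary factor here"
    if risk >= 4:
        return "stop: find an alternate way to solve the problem"
    return "proceed, with bias testing and monitoring"

print(triage(False, "hiring"))                # no human data involved
print(triage(True, "personalized_marketing"))
print(triage(True, "law_enforcement"))
```

It's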
fascinating because, to me, this is starting to sound,
and I agree with you, like the machines we build
are in large part reflections upon ourselves, especially when we're
talking about coding and software. I mean, obviously,
(17:44):
that's a creative process. I don't know that everybody views
it that way, but I think of it very similarly
to creating any kind of creative work. It's a reflection
of your process and, you know, the things that
are important to you, the things you've prioritized. And it
makes me think of how we're in an era now,
and I'm getting a little in the weeds here, but
(18:06):
we're in an era where we're more likely to address
things like, uh, mental health and the fact that we
need to be mindful and we need to improve ourselves.
And it's almost like taking that same approach, but applying
that sort of thinking to designing a system so that
we are being mindful to create the best system for
whatever purpose it's intended to address. Yeah, you've
(18:31):
got it exactly right. The way I think about it is,
how can we reduce the unintended consequences? Right? We know
there are going to be risks associated with it. How
are we going to have a discussion prior to putting
the solution out into the world, rather than, you know,
seeing all the negative impacts after? Can we have a proactive
discussion as part of your project planning meeting or your
(18:54):
design meeting, right, to proactively identify what are the ways
this could go wrong, and fix it? Jonathan, you know,
the easiest example that I can give is we're living
in this very interesting era where you know, AI as
a core technology is developing, and, you know, there is
all this value that you're getting from it, and
then there are all these negative things that can happen.
(19:17):
So think about, you know, way back when, when, you
know, cars were first invented, right? We didn't even
have proper roads. We didn't have seat belts, we didn't
have speed limits, right? And being in that phase where
there are cars running on the road, they're taking us
from point A to point B faster, so we want to
use them, but we don't have the seat belts put
in place, we don't have the speed limits set in place,
(19:40):
so you're going to see accidents. But we are humans.
We're going to learn from it and we're going to
come up with those speed limits. We're going to figure
out what are those guard rails, and we are
going to, you know, achieve a
point where you know, we have those guard rails in
place so that you can run with AI faster. It's
just that this interim phase is when you know, we
(20:02):
have to figure it out in tandem while it's
running in the real world, causing accidents. And in some
cases, those accidents can be things where you
have it maybe in a test environment and you think, oh,
this isn't behaving the way I thought it was. But
you know, thank goodness, it hasn't been deployed out in
the real world or within your company's, uh, processes,
(20:26):
so you think, oh, well, it didn't wipe out all
of our revenue because it's in a test environment. Uh,
and in other cases, I see some companies,
I'm not gonna name names, Beena, I'm not gonna put
anyone on blast here, but I have seen some companies
that have taken that kind of idea and applied it
in, uh, specific deployments of technology where there can be
(20:48):
some real world negative consequences to end users. Um,
and that to me has always been a concern. I find, yeah,
I find it hits me wrong. Yes,
and that's the reality of how we've evolved in
the technology space. It's a bunch of, you know,
technologists coming together and building these cool new shiny technologies. Look,
(21:12):
You know, as I said, I am a technologist in
my DNA, my training, and it's very easy to just
focus on all the good things it can do. But
with AI, now that realization has hit, you need other,
you know, skill sets at the table, whether it is
social scientists, philosophers, legal and compliance, to help
us figure out those seat belts and, you know, the
(21:35):
speed limits, the lanes, you know, because technologists by themselves cannot
do it. So you'll see more of the discussions coming
around ethics, which is resulting in new roles and
new jobs, which become core parts of your engineering process, right?
So that scope of who is involved in designing and
developing AI is definitely increasing. And the other big part,
(21:58):
you know, and this has been a challenge since I
started in tech. You know, there's a lack of diversity
in tech. It's a reality, right, But unfortunately, because AI
is so closely tied to human intelligence, if you don't
have enough diversity from you know, not only from a gender,
race, ethnicity perspective, but even a diversity of thought, right,
the AI solution you build is not going to
(22:21):
be as robust as it could be if you had
a diverse team at the table. Right, you've probably heard
of that classic example of you know, the robotic vacuums, right,
how it was designed and how it was built out.
And then, in Eastern cultures, it's normal to sleep
on the floor, and it sucked up the hair of somebody who was
(22:41):
sleeping, because it was never trained on that.
It didn't come up, you know, in the discussion
when it was being designed, because nobody was there from
that culture. Right. So I think, you know, the realization
that you need more diversity at the table, you need
more controls in place. It's all coming to the forefront.
I definitely see companies addressing it. But the DNA
(23:03):
until now has been, oh, look at all the cool
things this technology can do, let's go put it out, right?
But I do think, you know, companies are getting mindful
about it and hopefully we'll reduce the number of unintended consequences. Yeah.
I see the same thing reflected in the open source community,
where you have an open source approach to developing software,
(23:27):
and because it's open and anyone interested and capable
can contribute, ideas get tested very quickly. New perspectives
get incorporated very quickly. Things that are working stick around,
things that don't work get improved. And the way I've
described it to other people is, if you have a
(23:50):
closed off garden that you're working on, you're only as
good as the smart people who happen to work for you.
And if you go with this other approach where you
purposefully open it up, which is like the biggest version
of, let's try and get as much diversity of
thought in here as possible. Uh, you don't have that
limitation, because you've just said, well, now the world,
(24:13):
I mean, it's not the whole world, but
effectively the world, can contribute if they
wish. And I agree, I think having that
diversity is absolutely key to creating solutions that work for
as many people and as many potential uses of that
(24:34):
technology as possible. Me being a white man
in the United States, I am essentially the
catered to audience for a lot of tech, and so
I've seen how things that were made to work really
well for me do not work for some other people.
(24:55):
And that's such a tiny little microcosm when we're looking
at, you know, the greater scope of tech, which
goes so far beyond just consumer electronics. Um, I absolutely
agree that diversity is required if we're going
to have AI that is truly trustworthy. Yeah, exactly,
and, you know, but then it's never,
(25:17):
it's never as straightforward as we tend to simplify it
down to, right? Like when we talk about explainability,
there are real challenges, and those are real business challenges,
even when you go down the open source route, right?
A lot of times, if you go too far down
the explainability path, you know, you still have to share
(25:39):
data and algorithms, and those are strategic assets, and it
can result in compromising your company's IP, right? It
can result in, you know, security hacks, because the more
explainable you make it, the more susceptible it is to manipulation
if its functionality is fully understood. The privacy aspect of it,
prioritizing explainability, and, you know, how do you make sure
(26:03):
you hit a balance where you are making sure
you're mitigating the risk but at the same time protecting
your organizational IP? That's a solution
where there is no one single answer. It is
for the stakeholders to come together and discuss it and
identify where that balance is, because it's going to be different
(26:26):
depending on your business. It seems to me like you're
saying the real world is a complicated place and there's
a lot of different shades of complexity to it, and
that I can't just simply, uh, summarize it in a
black and white approach, which I greatly appreciate, uh, and
that that's interesting to me too. I'm glad to have
that perspective because again, like as a as a communicator
(26:48):
for tech, uh, I know that I too fall into
the same sort of pitfalls of oversimplifying for the purposes
of trying to get a concept across, because to really
dive into it, you start to feel
like there are so many threads that you can't
see the rope, or you can't see the
(27:09):
forest for the trees, if you prefer. But that's
very important to remember, and I think it is
a great reminder that again, like we said at the top,
that the use for this technology kind of defines the
approach that you need to take in order to make
certain that you're getting the result that you want.
(27:33):
Um, from a really high level, can you kind of
talk about your concept of what, and this
is almost a trick question because there are so many different variations,
but what an organization's process would be when
considering implementing AI solutions? Like a high level approach. Yes.
(27:55):
Historically, it's always been, you know, how can we use
AI to solve this business problem? And what's the
ROI, you know? How much profit are we
going to increase by doing this? Or how much cost
are we going to save by doing this? Trust me,
I've done these projects, and you know that's how
every conversation starts, because we want to use
technology to drive more business value, right? Whether it is
(28:18):
through customer engagement, optimizing our existing processes, and so on.
I think, if you are serious
about making your AI trustworthy, the discussion that needs
to happen upfront is defining what does trustworthy AI mean
for my organization, right? And, uh, it could
(28:39):
be different depending on the organization. It could be different
depending on the use case. But having those high level principles,
and there are plenty of principles out there, there are
plenty of frameworks out there, but I think every organization
needs to think about what are the key pillars that
they agree upon and that they would never want to
violate, right? And once you have those, then the next
(29:00):
step is to make sure every employee within
your organization understands it. Because it's not just your
IT team. It's not just the engineers or the data
scientists who need to understand ethics. It's that marketing
account person who is looking at using an AI solution,
(29:21):
buying it from a vendor to use it within your company.
They need to make sure that they are asking the
questions which ensure trustworthiness: the
software they're buying, has it been tested for fairness? What
was it tested for? So every employee within the organization
needs to understand, what does trustworthy AI mean for my company,
(29:41):
and how do I use it in my role? So, role specific training.
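As one concrete example of the kind of fairness question a buyer could ask, here is a minimal demographic parity check. This sketch is mine, not Beena's; the metric choice and the 0.8 cutoff, borrowed from the common "four-fifths" rule of thumb, are assumptions for illustration:

```python
# Minimal demographic parity check: compare the rate of favorable model
# outcomes across groups. A ratio far below 1.0 flags potential bias.

from collections import defaultdict

def selection_rates(predictions, groups):
    favorable = defaultdict(int)
    total = defaultdict(int)
    for pred, group in zip(predictions, groups):
        total[group] += 1
        favorable[group] += pred   # pred is 1 for a favorable outcome, else 0
    return {g: favorable[g] / total[g] for g in total}

def parity_ratio(predictions, groups):
    rates = selection_rates(predictions, groups)
    return min(rates.values()) / max(rates.values())

preds  = [1, 0, 1, 1, 0, 1, 0, 0]   # hypothetical vendor-model outputs
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
ratio = parity_ratio(preds, groups)
print(f"parity ratio: {ratio:.2f}")  # 0.25 / 0.75 -> 0.33
print("needs review" if ratio < 0.8 else "passes this check")
```

A low ratio doesn't prove the tool is unusable; it tells that marketing account person to go back to the vendor with harder questions, which is exactly the behavior she's describing.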
And then the other crucial factor to decide, and we've
seen variations of it in the industry, is, you know,
whether it is getting a chief AI ethics officer or
setting up an AI ethics advisory board, right? Making sure
that there is somebody who is responsible, you know,
(30:03):
to keep this moving within the organization is super important.
That's more from a people perspective. And then the last
thing is really looking at your existing processes. I don't
think you need to completely come up with new processes
or new controls, but just adding in a trustworthiness check
in your existing engineering processes or in your existing development
(30:26):
process or your procurement process, to make sure you're checking
for the trustworthiness of any AI tool that you
buy or that you build. You know, in addition
to the ROI asks, spend ten percent
of your time to brainstorm on what are the ways
this could go wrong, right? And capture it, and when
(30:46):
you build that technology, put those guard rails in place.
Now, it is guaranteed, it is impossible to identify
all the possible ways it could go wrong, but even
if you get, you know, some of the ways it could go wrong,
it is better than not thinking about it and not
addressing it. So that is a very comprehensive way you
(31:07):
can do it. But it is all easy. It fits
in with the existing trainings and processes that you already
have in your business, right?
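Her point about adding a trustworthiness check into processes you already have can be pictured as a simple pre-deployment gate. This is a minimal sketch of mine, with a checklist invented for illustration; the real pillars would come from an organization's own principles:

```python
# Toy pre-deployment gate: an AI project must clear the trustworthiness
# checklist before it ships, alongside the usual ROI review.

TRUSTWORTHY_CHECKS = [
    "fairness tested for this use case",
    "accountability owner named",
    "monitoring in place for drift and misbehavior",
    "failure modes brainstormed and guard rails added",
]

def release_gate(completed: set) -> bool:
    missing = [check for check in TRUSTWORTHY_CHECKS if check not in completed]
    for check in missing:
        print(f"blocked: {check}")
    return not missing

done = {"fairness tested for this use case", "accountability owner named"}
if release_gate(done):
    print("ship it")   # only reached once every check has been completed
```

I gotta say, like,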
as someone who is a technologist, and, uh, coming
at this from that angle, that was such a human
centric kind of answer. I really appreciate that. I've had
(31:29):
a lot of discussions with various leadership around different companies
and this idea of having that explanation and getting
buy in from different departments so that everyone's on the
same page and they have an understanding of the purpose
of a tool, how it's going to be implemented, what
we expect to get out of it. Uh. That's actually
(31:51):
crucial for anything, whether it's AI or not, but
because I've seen so many examples of companies where you
have one department who's like, the business development team wanted
us to put this in and I don't understand why.
And if they don't understand why, then you don't get
as good an output on the other end of it. I
think making that part of the conversation just as much
(32:12):
as you know, determining the approach to get a trustworthy AI,
I think that's absolutely crucial. Yes. And, you know, a
lot of times we think it's a technology problem to fix,
right? You know, to build trustworthy
AI, you need, you know, it's a technology problem,
it's your data scientists and engineers, is what you think,
but that's not the case, right? It's
(32:33):
the entire group that needs to come together. And the
risk is not just from a technology perspective. It's a
brand and reputation risk. There are financial consequences, there are customer satisfaction consequences,
there are so many other risks associated with your
AI not being trustworthy. Beena and I have a little
(32:54):
bit more to talk about with AI, but before we
get to that, let's take another quick break. I remember
covering that, over in the European Union, there were various
departments that were even talking about concepts that, again, are
(33:15):
like science fiction, far off concepts, but even the concept
of granting personhood to sufficiently advanced AI for the
purposes of figuring out accountability and responsibility for when something
goes wrong. Who gets held accountable when the AI doesn't
work right? What's your take on that? I think, you know,
(33:37):
we might reach that at some point, but in
the interim, till we have that kind of, you know,
rules or laws, I think it's absolutely, you know, one
of the component dimensions of trustworthy AI is defining accountability upfront,
meaning, if the AI goes wrong, who is accountable for it?
Who's going to face the Senate hearing? Who's going to
(33:59):
pay the fine? Is it the data scientist who built it?
Is it the CIO who approved the project?
Is it the CEO or is it a board member? Right? So,
and the good news with that one, you know, talking
about accountability upfront makes everybody proactively think about the
ways it could go wrong, because you don't want to
put your name on something that might go wrong and
(34:22):
you have not thought about it. So until we get
to that, you know, machine citizen rights level, I
think, you know, even today there is a dimension of
trustworthiness which is really around defining, putting in a name,
for who is accountable when your AI goes wrong. I
(34:42):
agree that that's important. I have seen some of those
Senate hearings with various UH tech people sitting in the sea,
and I know that if I were in one of
these conversations, I would not want to be that person.
And making sure we specifically define who that person is
and that it's not me would be top of my
priority list. Well, I'm also curious then. Uh, so we've
(35:06):
seen in a similar sense some movement on things like
autonomous cars. Uh, on a similar note, still talking about accountability,
where we're starting to see more governments try and consider
who is accountable for any accidents that might happen
under a car's autonomous or semi-autonomous operation. Obviously that's been
(35:30):
a big point of discussion here in the United States,
and, uh, this is one of those things. How
closely tied do you think technology experts need
to be with say politicians who may not have the
insight into tech, but yet are also responsible for creating
(35:54):
and enacting policy that's going to have an effect on tech.
Do you see there being more crosstalk? Yeah,
you know, unlike the car example and the seat belt
and speed limit example, you know, AI does need an
understanding of the technology to come up with those speed limits.
(36:17):
So, you know, we've honestly entered that
era where collaboration is king, right. We have to make
sure that regulators and technologists, uh, policymakers, they are collaborating
and each one is learning from the other, to come
up with the best possible guard rails or regulations or laws,
(36:38):
because this is not something that can be done in isolation,
unlike that auto speed limit example. So I think
we're going to see more, whether it is entities
being set up who will drive this collaboration. But there
is definitely, you know, across the globe technologists being pulled together,
(36:59):
whether as an advisory committee or a council. That is
happening now, and you know, I do think we will
start seeing results of that collaboration coming out sooner rather
than later. I also believe that, just like
I was talking about every organization should train all their employees,
I think everybody who is involved in the regulation
(37:24):
making process should have a basic understanding of AI, a level
of AI fluency, or you know, an understanding of what
does machine learning really mean, what can it do, what
can it not do? So I call it the AI
literacy training, right? So I think that's like ground
stakes to drive a productive collaboration. But I think this
(37:45):
is the time for people like you and me, Jonathan,
to really step up and make sure that we're collaborating
closely so that it's informed and relevant
regulation, or relevant policy, that's put together. I think relevance
is absolutely the right word to use. Uh, again,
(38:06):
I'm not putting anyone on blast, but there have been
plenty of stories of people, whether they are in the
regulatory field or general politics, where their level of tech
savvy is probably not even measurable based upon some of
the things we've seen, and that is terrifying
(38:27):
when you realize the reach and the effect of technology
and how if you have a misunderstanding of it, you
can tackle something that's not really a problem, but you've
built it up as if it were while completely missing
things that we absolutely need to pay closer attention to.
(38:47):
So I do try to make literacy one
of those things that I push for and hopefully I
succeed more often than I fail. Yeah, we live
in this era now where, you know, at least in
the, uh, in the corporate world, right, we're seeing more
(39:09):
and more boards getting more technology savvy, leaders
who understand technology, because every company uses technology,
uses AI, no matter which industry they're in, right? So
we're seeing that composition of boards changing, right? And I
don't think we're very far from the time when you know,
(39:30):
having a basic AI or technology understanding will be almost
a prerequisite, right? Again, as I said, we're living in
this interim crazy phase where there's a lot of things
happening and we don't necessarily have all the foundations set up.
The exciting news is for our generation, Jonathan, this is
our opportunity, right? The work we do today is going
(39:52):
to be setting the foundation for future generations. So I think, uh,
you know, having that basic AI literacy, no, it's not
set up yet, but, you know, we now understand that,
you know, everybody who is involved in policymaking or regulations
needs to have that basic understanding. So let's make sure
that, you know, they have that. That's great,
(40:14):
it's looking at something that I have defined as a
problem and you have defined as an opportunity, which I
needed to hear honestly, because that's the kind of optimism
that I find really motivating. Beena, thank you so
much for being on the show. Your book Trustworthy AI.
I have a copy coming to me. I have not
(40:36):
yet been able to read it. I am so eager
to go cover to cover on this, because just
this conversation has really energized me. And, um, you know,
when you have a podcast about tech and you've done
more than a thousand episodes, sometimes you feel like, I've said
everything there is to say about that, and then I
have a conversation like this and I realize, this is
an iceberg situation and I've just touched the very tip
of it. There's an entire world beneath the surface of
of it. There an entire world beneath the surface of
the water that I haven't even scratched. So thank you
so much for coming onto the show. Jonathan, this is
a very energizing conversation for me as well. Thank you
so much for having me on your show. Once again,
I have to thank Beena Ammanath for coming on the show. Uh,
(41:19):
I was thrilled at this opportunity when I first got
the email suggesting that I have her on my show,
because to be totally clear, her team reached out to
me and I just didn't even think about that possibility.
I am so glad that I followed up with that.
I do plan on having more interviews on this show
(41:40):
in the near future. I've got a couple more lined up.
I'm gonna try and do that more frequently. It is,
I'm gonna be transparent with all of you, it is
very tricky for me, because scheduling, uh, is tricky. People
are very busy, and it gives me a lot of
anxiety just being absolutely transparent with all of you out there.
(42:02):
The process of scheduling gives me a lot
of anxiety. So it's something I'm working through and I'm
trying to get more people on the show, one, because
there are so many interesting people out there. And just with
this conversation with Beena, I really got that feeling
of, I need this, because it is giving me more
perspective than what I have, and I don't want
(42:23):
Tech Stuff to just be a narrow laser focus of
what Jonathan thinks about tech. Secondly, um, you know, I
think that it benefits the show, obviously, to have
that extra voice in there, and that means that it
becomes more enjoyable because despite my enormous ego, I realize
(42:44):
I cannot be the most entertaining person in all the world, uh,
no matter how hard I try. So I hope you
all enjoyed this. If you have suggestions for future topics,
maybe you have suggestions for future guests I should try
and get on the show. Reach out to me. Uh,
I promise I will do my best to get that
(43:06):
person on the show. I can't promise that it will happen,
but I'll try and I'll work through this weird stress
I get whenever it comes down to trying to schedule
things. And, uh, just to be clear, Beena was
amazing because we actually tried to record that interview on
one day but had a technical issue and ended up having
(43:26):
to reschedule. She was amazing. She was really good about
all that. So despite all of my anxiety, everything went great,
which, I think, and this isn't meant to be a
therapy session, but I think that's very typical for me,
where I get worked up about something, and it turns out that
something wasn't really that big a deal. It was just
the anticipation of it that was the problem. So if
(43:50):
any of you out there suffer from something like that,
you know you have that same sort of experience. Listen,
I got your back. I know how it is. It
is frustrating, but you can do it. All right,
pep talk over, episode over. I hope you enjoyed it.
I am on vacation for the rest of the week,
so you should expect some classic episodes for the rest
of this week. But that doesn't mean they're bad. It
(44:11):
just means they're old, just like me. I'm old, but
I'm not bad, and I will talk to you again.
Oh, if you want to reach out to me, you
gotta do it on Twitter. The handle for the show
is TechStuff HSW. There, now I get
to say the end catchphrase: I'll talk to you again
really soon. Tech Stuff is an iHeartRadio production.
(44:38):
For more podcasts from iHeartRadio, visit the
iHeartRadio app, Apple Podcasts, or wherever you listen to
your favorite shows.