Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:12):
Hello, my name is Graham Class and I'm your host
for this season of technically speaking, an Intel podcast. While
Intel is at the forefront of so many cutting edge technologies.
This season is all about artificial intelligence and that's why
I've been tapped as your host. Having a background in tech,
as a software engineer. I was always interested in merging
the advances of artificial intelligence with my love for media
(00:33):
is culminated in one of my other projects, Daily Dad jokes,
an A I powered podcast, churning out jokes and humor
for listeners worldwide.
But artificial Intelligence can do a lot more than help
whip up a corny joke. This technology has been revolutionizing
the way we engage with the world with innovations across healthcare, agriculture,
business and even the public sector.
(00:53):
Another way that artificial intelligence is changing the world is
through philosophy. The term ethical A I is a framework
on how to use A I, what system should be
in place to govern its use with business and consumers.
In this episode, we'll dive into the ethics of artificial
intelligence with one of the pioneers in the field.
Joining me for today's conversation is Intel's Ria Chu R
(01:16):
A can perhaps be described as the moral compass of
the company's A I as an A I software architect
and generative A I evangelist. She is charged with finding
responsible trustworthy solutions for Intel's Internet of Things Engineering group.
Her role exists at the intersection of hardware and software
product design and effective consumer use.
(01:36):
Having studied extensively at Harvard in the subjects of computer
science and data science. The domains of expertise are solutions
for security and privacy in machine learning, fairness, explainable and
responsible A I systems uncertain A I reinforcement learning and
computational models of intelligence. She is a reoccurring keynote speaker
on issues in data science and responsible A I. We
(01:59):
are very excited to have her on the podcast to
share her expertise on Intel's ethics in their A I
development
Ria. Welcome to the show. Thank you,
Speaker 2 (02:10):
Graham. It's awesome to be here.
Speaker 1 (02:12):
I've had a look at your bio and would like
to know how did you come about to join the
Intel family?
Speaker 2 (02:19):
Sure, I joined Intel in 2018 when I was 14
years old as an intern. I had, yes, I had
an amazing mentor who went through all of the legal
pages and the review needed to get me to that position.
So initially, I interviewed with three teams on three different
areas in the A I space. One of them was
around A I and healthcare, very theoretic
(02:40):
and mathematical implications and path finding the other two were
on software development and profiling. And the next was on
deep learning optimization specifically. So I did have the opportunity
to pick the one on optimization for deep learning for hardware.
And that is how I started off my journey at
Intel and got introduced to it. The interplay between hardware
and software is something that always drew my attention. So
(03:02):
when I was able to work on that as part
of my first role as an intern, I was really excited.
Speaker 1 (03:07):
OK, great. So now uh I understand that you're a
software A I architect.
Can you just give an overview of what that entails
Speaker 2 (03:16):
as a software architect? Today? I have a couple of
roles and responsibilities corresponding to the latest and greatest, which
is very exciting to me in my day to day.
The first is generative A I. So looking at and
taking into account the different software optimizations that we're planning
for generative A I, how the workloads are shaping changes
in the algorithms over time as well as also the
associated mechanisms that we see that are
(03:38):
in touch with them as an evangelist. I also get
to work on top of my software architect role as
a marketer and an advocate for these technologies. So creating
very short demos and tutorials for users to quickly grasp
what exactly is going on with this model. How can
I use it in my day to day? How can
I put it to my use case. So a lot
of the focus today for me is on gender to
(03:59):
A I, I also look into ethical and explainable A
I tools and technologies as part of my path finding.
Speaker 1 (04:05):
Yeah, I've been using generative A I apps to do research,
creating podcast, artwork and experimented with creating music. So this
leads me into asking you what's your definition of artificial intelligence?
And maybe some examples of where we're seeing it as
a central topic in the tech world.
Speaker 2 (04:24):
The way that I like to define it is something
I copied over actually from our recent regulations on A
I around how A I models are agents or systems
that are capable of consuming and producing data in an
environment and also taking actions that can in turn influence
our decisions. There's a lot of use cases for them everywhere, healthcare, retail,
(04:45):
et cetera.
Speaker 1 (04:47):
Yeah. When I talked with uh people even in the
tech world, there's a lot of confusion around OK, you've
got algorithms, you've got A I, you've got machine learning
perhaps if you could start with maybe some of the
difference between algorithms versus say A I. What do you
see as the difference between the two
Speaker 2 (05:04):
typical algorithms I'd say are based off of certain schemes
that we're already aware of with machine learning. You have
these new paradigms that are coming in and completely spinning
the narrative, things like continual learning, very large models, different
types of state machines altogether, depending on the application you
integrate it into. So I would say there are some
fundamental differences that are coming in between algorithms and machine
(05:26):
learning models on that front when it comes to use
cases application and of course implementation as well.
Speaker 1 (05:32):
And where I see the power is sort of combining
the traditional sort of if then else algorithms uh with
A I. And I'm just wondering if you've seen any
sort of practical applications merging of all these techniques.
Speaker 2 (05:48):
Yes. And I'm very interested in composite A I. It's
something that I'm getting to work on a lot more
in my day to day. And something that we're actually
doing a demo for at intel innovation where we are
chaining multiple large language models together. The way I see
composite A I is being able to tie together multiple
models as part of a
interface or an application with chaining models. I see it
(06:09):
as a subset of composite A I where you have
models that are linked to each other and have dependencies
on their inputs and outputs. It can be sometimes a
nightmare to get the dependencies altogether because you have cascading
models one after the other dependent on each is output,
but it is possible and it does give you a
lot of applications and opens up the possibilities where you
can get to a very nice user interface that users
(06:31):
can interact with. Developers can build upon businesses and other
communities can just leverage and adopt that is giving you
a lot of capabilities at once with ease of deployment.
Speaker 1 (06:40):
Oh,
that's good. Now, turning to the ethics side of it,
which you've done quite a lot of thinking and work
in how would you define ethics in A I
Speaker 2 (06:50):
with ethical A I, the definition that I like to
adopt is socio technical development of A I systems and
that involves societal and technical aspects, but really focusing on
the implications and the intentions with these
Speaker 1 (07:01):
algorithms. In terms of when you're talking with your peers
and colleagues, it has been a lot of discussion and
talk about trying to have a uniform ethical framework that
at least gives a common language into, you know, when
you're discussing these sorts of things related to ethics in
A I,
Speaker 2 (07:17):
there are common frameworks that are in place. Most of
them are centered around implications and intention and how we
structure that around certain technologies. Right now. It's very popular
for applications, generative A I where we see these frameworks
being put into place around, let's look at the inputs,
the outputs and then the overall modeler framework. And this
may
simplistic, but it really is boiled down to these very
(07:38):
simple elements. Similarly for other A I domains that are
outside of generative A I like object detection, it's very
much focused on what is the particular use case? For example,
is it something that is of high risk like health
care applications or surveillance or is it something that's a
bit lower risk like content creation
and then seeing how exactly our user experience and our
development of those models is echoing ethical A I principles.
(08:00):
So I would say like to summarize, there are different
frameworks and summaries that we apply. But of course, the
templates need to be flexible when we're talking about ethical
A I for these new A I models.
Speaker 1 (08:11):
how do you go about ensuring that your staff and
your engineers and your product managers
actually embed that ethical framework into its A I development.
Speaker 2 (08:22):
Sure, it's such a challenging problem even to describe as
well as you're mentioning it, you know, there's so many
different things that you can actively do, right? Like as
you mentioned, policies, assessments, et cetera. So at Intel, we
take a multiple approaches towards it. The one thing that
we very heavily emphasize on is internal governance. And Laman Nachman,
who's my mentor and also leading the responsibly
(08:44):
efforts at Intel very neatly and concisely describes them as
guardrails that we have internally in place. And these are
really guidelines that are designed to help our developers, engineers, managers,
and our communities and marketers, et cetera. Understand the implications
again of what exactly are we producing in terms of
the content? What are some technical solutions that we can
(09:05):
instill mid pipeline or early on before starting the effort
when we're getting started with A I development efforts
and I would say that that's the core process that
we focus on. We're also very heavily invested in technological development,
whether that's through the deep fake detection work that L
Deir and team are taking on um explainable A I tools,
et cetera. So really trying to approach this from a
(09:25):
governance perspective internally, from a tooling perspective, what we can
provide to the developer community and our customers and to
partners and from a third perspective, regulations, how do we
influence the industry at large and help contribute to discussions
Speaker 1 (09:39):
that's really good. And
you mentioned the work of Lama Nachman and we're actually
going to be talking with her in an upcoming episode
this season. So I'm looking forward to asking her about
this as well. But I think you've said the key phrase,
deep fake, so I might switch the to that side
of things. So in terms of the society and, and
culture in general, um there's some people that are hesitant
(09:59):
about A I particularly around A I limiting jobs, you've
got deep fakes. I've actually created a clone of my voice.
What do you try and do to reassure people who
have hesitations?
Speaker 2 (10:12):
I'm definitely not,
I would say not directly enthusiastic about technologies that are
allowing for passing off as another person for copying and pasting.
Essentially in certain cases, we see the development of those
technologies for a certain use case and then it does
start to stray away from that into some of these
newer kind of applications that are scary as you shared.
(10:34):
So when it comes to reassuring individuals, my family, my
community as well and the industry at large, I think
that it's definitely a problem to see in a straightforward way.
Honestly, without the hype surrounding it, there is a levity
associated with the disadvantages of the technology that we do
need to consider. We also do see the benefits of
them for different things, whether that's improving your ease of
(10:55):
using it, just being able to communicate with others. From
my perspective, what I try to do in my space
is to look at an honest assessment of the technology,
which is very common in the ethical A I domain.
And to see what exactly is it really contributing to
the problem statement and if it isn't contributing to it,
then do we need it?
Speaker 1 (11:13):
And in terms of intel's I guess method or communication
with the society and people at large, are they working
on things to help people?
I feel a little bit more comfortable about this new
world we're moving into.
Speaker 2 (11:29):
Yes. And we, we tackle it from a couple of
different fronts. We've got some amazing teams working on different
parts of the puzzle. One of them is democratization where
one of the challenging things about A I from an
ethical A I perspective, but also in general,
from a development perspective is being able to give communities
access to the technology so that they can test it
and validate it. I've been speaking about ethical A I
(11:51):
for about two years now or so. Last year, we
really didn't have the same amount of tools and techniques
that we have this year and also the popularity of
testing and validating A I systems, right?
We always understand and I think many companies and organizations
understand it's not a one size fits all solution for
ethical A I. Um you know, many companies and organizations
(12:12):
are trying to do their best. So I would say
that again that, that push back that community that we're
trying to create around ethical A I is critical for
us going forward to be able to better build solutions.
Speaker 1 (12:23):
Has there been any case studies within intel that you
could share that maybe there was a real challenging ethical
conundrum
uh for producing A I software and you know, how,
how was it resolved? How did you work
through it?
Speaker 2 (12:37):
Generative A I is definitely a very big one. So
we're always actively cautious about the types of implications of
our technology, whether or not we can incorporate disclaimers or
clarify on the intent of it as well. And um Graham,
one of my favorite parts
of ethical A I from a technical perspective in terms
of solutions is something called model cards. Model cards, clarify
(12:57):
a very simple theme around ethical A I which is,
you know, figure out what exactly is the intention the
core assumptions and the development that went behind the model
and what you're going to use it for as part
of deployment.
And I think that for me personally, I see that
that theme is conveyed as part of our efforts in
generative A I, there's a lot of challenging things out
there when it comes to image generation, copyright, et cetera
(13:18):
or even, you know, object detection related technologies for retail.
If you have solutions like intelligent cue management or automated
self checkout, it makes sense. But you know, how do
we keep it from proliferating otherwise?
Speaker 1 (13:30):
And what sort of work is going on with inclusive
A
Speaker 2 (13:33):
I diversity of state
is critical for the A I models that we're building today,
whether that's detection of skin agnostic of skin tone or
being able to adapt to different folks with different accents.
So at intel and again, across the industry, I think
a lot of the efforts are really about making sure
we have the right people on board, the right experts
with different backgrounds, we're able to contribute to the technologies.
Speaker 1 (13:55):
One
thing when I started looking into machine learning very quickly,
I got a sense of,
you know, being a traditional engineer, you kind of go OK,
input output and you kind of know what's in the
in the black box to transform it. When I started
working with A I and some machine learning code, I
couldn't get a sense of that 1 to 1 kind
(14:16):
of mapping of what the output is to input and
that comes to the to transparency and uh explainability of
A I algorithms.
What are you seeing and also what is intel seeing
around trying to make that understandable to the end users.
Speaker 2 (14:31):
It's a really interesting question because explainability is one of
the first topics that we think about when we think
about responsibly. I and I agree the black box metaphor
has been used so many times um because it's true.
But the key idea is about demystifying what exactly is
going on within the model. Whether that is the internal representation, again,
(14:52):
the data that it's pulling from how the data is
being leveraged feature and
importance, et cetera. There's also an added consideration to explain
ability around surfacing that to an end user. For them
to understand why the model made a decision I would
say with Intel, we're approaching it in a couple of
different ways. And I'm just, I'm very excited to see
how can different experts approach our problems. We have a
(15:12):
dedicated suite of technologies for explainability. I led a team
that was developing one of these for Intel Cno where again,
you're getting that
internal representation analysis, Saliency maps and other technologies for explainability.
We also incorporate transparency and explainability into our algorithm. So
whether that's being able to visualize what's going on again,
saliency maps or you know, really good user experience user
(15:34):
interface to figure out why am I being surfaced this
particular prediction or decision from a model? I'd say that's
a couple of the ways that we are integrating and
thinking about explainability at Intel.
Speaker 1 (15:46):
You're listening to technically speaking an Intel podcast. We'll be
right back.
Welcome back to technically speaking an Intel podcast.
(16:07):
One of the obviously the big things is around the
privacy and security of data. Perhaps you could outline some
of the new techniques and new initiatives out in the
industry to try and use the power of A I
but still protect companies, information and and
data.
Speaker 2 (16:23):
I would say there's mechanisms like differential privacy and many others,
homomorphic encryption. These were incredibly popular two years ago, you
kind of don't hear them a lot now. So again,
the hype is it it depends on the technology of
the day.
But yes, localization is a key thing. It's actually something
I have the opportunity to look at now as part
of my role around hybrid A I edge versus cloud
(16:43):
edge and cloud. So there's a number of different parameters
and assumptions that we can start to make at the
edge around localization privacy of data, not necessarily having
to communicate it back to the cloud that are changing
the way that we think about data privacy and security
for A I models Federated learning is another paradigm like this.
So to put it shortly, I'd say there are mechanisms
(17:04):
that are coming up in place, but there is still
more needed emphasis on security and privacy, more development for technologies,
et cetera.
Speaker 1 (17:12):
OK. So just to extend that just a little bit more.
So say if you're meeting with an executive saying, I've
been hearing all about large language models and I was
talking to my colleague uh in another company and they're
starting to use chatbots with within their organization and using
the power of that is that related to large language models,
but fine tuning it to their own corporate data in
(17:33):
their own
servers. If you like I sort of on the right track.
Speaker 2 (17:37):
Yes, that is a perfect use case. And thank you
for bringing that up, you know, centralization of data on
your server. There's also red teaming um gram that's worth
mentioning where you're testing your model or your system thoroughly
with the generative A I space. There's come to life,
a lot of different types of red teaming approaches including
prompt injection and many others, which is really a
(17:57):
being able to test and mock the kinds of inputs
that adversaries would provide to your model and figure out
how the model is going to behave. What are its
strengths and weaknesses, et cetera. Of course, the compute needed
for that is another story. But in addition to that,
there's also again, the testing and validation approaches. So red
teaming is really critical to that validating how susceptible your
model is to potential attacks, whether it's biased, et cetera.
(18:20):
So lots of, lots of cool and interesting approaches coming up.
Exactly as you noted, that's a key example.
Speaker 1 (18:26):
So going back on the ethics side of things, what
are some of the arguments for a corporation, an organization
to have a clear set of code of ethics and
is intel helping companies establish those sorts of guidelines and frameworks.
Speaker 2 (18:43):
There is a number of different best practices that organizations
can incorporate today for responsible A I. One of them
is the internal governance assessments that we talked about, which
is a step by step process to checking where A
I is used in your organization. How is it being
shipped outside? What's your go to market strategy? What's your
change management strategy, etcetera.
So in terms of Intel's contributions, we're very excited and
(19:05):
passionate about communication with customers and partners and communities in
general around. What exactly can we do to help with
the ethical A I development that can include, you know,
potential compute platforms that help with running this type of solutions, preprocessing,
post processing. What exactly
you need towards that? Or if we have developers working
with Intel Open Veno and I work in the Open
(19:27):
Vino team right now. We want to know what makes
it easier for developers to be able to run these
models and deploy them their feedback in terms of, you know, hey,
you know, is this challenging to use? I don't know
how this is working. Um something that I do as
part of my evangelism team is again, helping contribute to that.
So I would
say that as part of the practices, there's a number
of different things that we do today with solutions with guardrails,
(19:48):
with assessments. And at Intel, we're trying to help with
the communication, the establishment of these elements as well as
the technical solutions and how we can help build foundations
that our partners, customers, the community and industry can take
from there.
Speaker 1 (20:03):
You mentioned that you're part of the Intel Open Vino group.
Perhaps you could spend a bit of time just explaining
what that group does and what your role in. It is.
Speaker 2 (20:12):
Sure. The Intel Open Veno group is a team dedicated
to helping provide capabilities and developing our Open Veno toolkit.
The toolkit is centered around computer vision related applications and
it's recently expanded over five years to generative A I.
And it is really centered around taking models in many
different frameworks like Pytorch, tensorflow, caras, et cetera and converting
(20:34):
and optimizing them to an intermediate representation format that you
can deploy on different hardware, including Intel CP US, GP
US and other types of hardware.
Speaker 1 (20:43):
And have you seen any I guess impact on, on
innovation to, to put it bluntly does having a code
of ethics, put a brake on innovation and
for individual engineers, does it leave them feeling? Oh, maybe
I shouldn't try these things. Is it a hindrance?
Speaker 2 (20:59):
The big question I've encountered this question before but my,
my answer to it is no, it is not. Because
um what again, my personal opinion and what I've also
seen at Intel and through my colleagues, mentors and industry
academia and other circles
at the core of innovation is certain themes like improving
quality of life, et cetera. And as a part of
(21:21):
that human rights responsible A I adoption of technologies and
understanding why you're using technologies with awareness, those are all
key attributes. So I would say if we're able to
design the process in a way that's efficient, that is
incorporating the minimum
requirements and has the flexibility to grow with the technology,
then we're doing it right? And it is not a hindrance.
(21:42):
Time to go to market is a key item. However
responsible A I process is while they may take time,
they don't necessarily have to hinder that goal if they're
streamlined and done efficiently. The onus is on all of
us to be able to contribute to that kind of
strategy or development of that strategy.
Speaker 1 (21:57):
And in terms of the
A I evolving over the next five years, you know,
where do you see it going?
Speaker 2 (22:04):
Human centered A I, that is my personal opinion on it.
I've done a lot of research on it. I also
had the opportunity to author publication on it. Technology that's
centered around the human experience that is contributing to the
way that we think that we act and that we
interact with others
I would say is the key thing. And for me,
that's the most exciting applications, whether that's smart care robots
(22:24):
for the elderly, using generative A I for health care applications,
identifying new protein folding related techniques or something similar. But
centered around the human experience, I would say. So Human
Centered A I is a good theme for that overarching journey.
Speaker 1 (22:39):
Yeah, the Human Centered A I is a very interesting concept.
And have you seen any examples, either in the start
up community or within Intel or in the industry where
you've given some examples? But is any that are actually
like kind of in production today?
Speaker 2 (22:58):
So we have some accessibility research that we've done with Intel.
You know, Laman Kachin also leads the human computer interaction
lab and we see a lot of I see a
lot of great research coming out of that around accessibility,
hearing related initiatives, et cetera. I would say that they're
in the process of being researched right now to my
knowledge across the industry of technologies that we can actively
put in place. But there are blueprints in place for
(23:20):
Human Centered A I
technologies. So it will be exciting to see how they evolve,
how you know, we take into consideration newer models like
generative A I that again, popularity just kind of popped up,
but they've been around for a while. So we need
to see how the technology adapts, but I think it
will stay true. To like the test of time in
five years time and then we will be able to
see and interact with A I applications that are centered
(23:42):
around our experiences around nature, et cetera.
Speaker 1 (23:45):
How do you differentiate the two between
the ethical A I and responsible A I? Um because
in my mind, it's kind of a little bit in
a little bit jumbled.
Speaker 2 (23:54):
Sure, I use the term actually in overlap, uh just
my personal bias towards you. But I, I have seen
that there are differences, there's been multiple efforts to establish
a nomenclature in the ethical A I domain. So responsible
A I is seen more as the internal governance, the
processes and practices that we put towards A I. Whereas
ethically I is seen as really maybe kind of a
(24:16):
combination of the societal and technical aspects as I shared earlier.
So responsibly I in a sense is the accountability and
responsibility part of it.
Speaker 1 (24:24):
Uh I talked earlier about the future of A I.
How is intel gonna be part of that wave in
terms of its programs and solutions for customers
Speaker 2 (24:33):
A I is a key inflection point for us. We're
excited to ride the new wave, collaborate with our again, partners,
customers communities and um see what we can do next.
What's the next great big thing?
Uh Generative A I is definitely a key focus for us.
It's what our customers want, it's what developers want and
it's what users want as well for their content creation
(24:54):
and many, many other needs. So we're very focused on that.
We're also incredibly focused on the compute. I see a
lot of and get to work with a lot of
wonderful engineers that are very passionate about solving these problems
at hand. Specifically these um because there's, you know, so
much that you can do a lot of problems in
the LLM and generative A I
based around, you know, large models, large footprint, changing outputs,
(25:16):
not a lot of predictability, challenging to benchmark, etcetera. So
I think that Intel is working on and actively positioned
to help our customers. Developers provide these types of optimizations,
the right kind of compute et cetera for, for the
new wave of A I but outside of generative A
I also, there's a lot of other A I applications
that we're aware of human centered A I, et cetera
(25:37):
that we are also actively working on. So we're ready.
Speaker 1 (25:41):
Oh, that's, that's good to hear. I've definitely learnt quite
a lot. So thank you very much for your time.
Speaker 2 (25:47):
Thank you, Graham. Appreciate it.
Speaker 1 (25:53):
I would like to thank my guest Ria Chu for
joining me today on this special episode of technically speaking,
an Intel podcast
ethics and artificial intelligence are so important right now. And
what I've learnt from today's discussion with R A, having
a code of ethics can be an important standard, especially
when it comes to deep fakes companies in the media
industry should have a rule about never Impersonating someone without
(26:17):
their knowledge. In my experience, I've been able to clone
my own voice within a day and it's a pretty
good quality
for me as an engineer and a technologist. I think
that's really interesting. However, it does throw up a lot
of questions around ethics and whether we should do these things.
The other thing Ria touched on is human centered A
I And that's really interesting from my perspective, I think
(26:39):
technology has moved towards trying to be human centered. And
it's good to see that A I wave that is
coming is still trying to keep humans as the center
of any product and technology design.
And talking with Rea really did hit home to me
that it is artificial intelligence, but I am looking at
(27:00):
the way that it can actually augment us. I think
that it will augment our jobs. I don't think on
balance that it will take away jobs. You only have
to look back in history from the printing press to
the loom. The A I wave that we're going through
now is just another evolution of us as a species.
And I love discussion around the ethics and the philosophy
(27:22):
of A I,
I hope it will continue.
And that's all for our first episode. Thanks so much
for joining me today. Please join us on Tuesday October
17th for the next episode where we speak with experts
on the way A I is innovating agribusiness solutions. You
can follow me on linkedin and Twitter or X with
the handle at Graham Class or check the show notes
(27:45):
page for links. This has been technically speaking, an Intel podcast,
technically speaking was produced by Ruby Studios from iheartradio in
partnership with Intel and hosted by me Graham Class.
Our executive producer is Molly. So our EP of post
production is James Foster and our supervising producer is Nikia Swinton.
(28:07):
This episode was edited by Ciara Spring and written and
produced by Tyree Rush.