Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Great question, because we often assume, when it comes to ethical AI or responsible AI, that it's about technology tools, but a lot of it is outside of the technology tools. It's about who is writing the algorithms. I wrote a Forbes article, and there are a lot of statistics around it. But, in a nutshell, a couple of things: the more diverse the team that writes the
(00:22):
algorithms, the better. Think of it as diversity by design.
Speaker 2 (00:27):
Welcome to Tech Travels, hosted by the seasoned tech enthusiast and industry expert, Steve Woodard. With over 25 years of experience and a track record of collaborating with the brightest minds in technology, Steve is your seasoned guide through the ever-evolving world of innovation. Join us as we embark on an insightful journey, exploring
(00:49):
the past, present and future of tech under Steve's expert guidance.
Speaker 3 (00:55):
Welcome back, fellow travelers. Today we have the honor of hosting a true industry pioneer. We are excited to have Swathi Young, who is the CTO of Alwyn Corporation and CTO Advisor at SustainChain. Swathi excels at implementing data architectures and artificial intelligence to improve efficiency across the transportation, healthcare and federal sectors.
(01:18):
She is passionate about using AI to drive sustainable solutions and positive social impact. Swathi, it's great to have you here to share your insights on this potential game changer for artificial intelligence. Can you tell us a little bit about yourself, this amazing journey, and how you really got into the topic of AI?
Speaker 1 (01:37):
Awesome, thank you so much for having me, Steve. I think an overview of my experience would be a good starting point, so I'll take you way back in the day. A couple of decades ago I was a little girl in India, playing carefree in mango trees, who just didn't have a vision of her
(02:00):
future. Frankly, I was into both STEM and the arts. I was passionate about science, but I was also passionate about Indian classical dance and drama. So as I was getting into high school and college, I really wondered: what is the place for me where I could balance my
(02:25):
passions? I was thinking of journalism. I even explored archaeology, but this was way back. Think of the early 90s in India, which is different from the India of today, and we didn't have those options readily available in most cities. Archaeology is still a rare field today.
(02:47):
But journalism definitely could have been a possibility if I had been in another country. My mom, I want to say, was very prescient. She said: just do your engineering, and in the future you might have a chance to do what you want. And who knew?
(03:07):
The way she thought might have been a vision of today's day and age, when you don't need to be a journalist to start a blog, a podcast, your TikTok videos or a YouTube channel, right? So I did do my engineering, listened to my mother, and got into the field of technology straight out of engineering
(03:30):
school. I got a job heads-down coding, writing software. This was with Oracle. It had an India Development Center where we used to write software for Oracle applications. But my curiosity was not fulfilled.
(03:50):
I was very curious about where the software I was writing was being used, and I did not know. People would mention big names: it's used for procurement, it's used in supply chain. And this was me as a 21-year-old; I was so confused. I had never seen what a procurement office looks like. I didn't know what a purchase order was.
(04:12):
I didn't know supply chain. So my curiosity led me into consulting. I said: I want to see where the software is being used. So I pivoted from developing software to implementing software, which meant I had to travel a lot. I was traveling from India to the US, from India to Belgium, where I
(04:35):
did a very interesting consulting project for GE. Every time I look back on my career, it's my curiosity that has driven me to take different steps and led to various outcomes.
The same thing holds true for AI. It was eight years back when I first started hearing about machine learning and artificial intelligence.
(04:55):
Obviously, with a technology background in software development, I dealt a lot with data; I had done data warehouse projects way back in the day. But when I learned about machine learning eight years back, I thought: oh, I know what data is and I know the potential of data, so let me dig deep into the potential
(05:15):
of machine learning. So that's how I got into AI. Again, my curiosity drove me to learn a lot. I'm an autodidact; I did a lot of Udemy and Udacity courses on my own. But the main thing is we have to see where the rubber meets the road. So I was grateful to Alwyn Corporation for giving me multiple
(05:36):
opportunities to explore using machine learning for research purposes, initially in lung cancer research. We did a lot of machine learning projects purely for research. And then I got very deeply involved with the ethical AI framework for the US government. This was a volunteer project where I was working with
(05:58):
multiple people on co-authoring the framework. So that's how it led me to where I am today, and for the last 10 years I've held leadership positions as a CTO, leading large and small implementation teams for startups, midsize companies and even large companies like Amtrak.
Speaker 3 (06:19):
Wow.
So, Swathi, it definitely seems like you have been there in the trenches and have seen machine learning from its very early beginnings; you mentioned about eight years ago. Getting into the space with that background in understanding the data is incredible, because with artificial intelligence and machine
(06:40):
learning, you've got to be really proficient at understanding the data: how it's wrangled together, how you work with data classifications and data models. It definitely seems like you're verging on the edge of data scientist, almost, so you definitely wear multiple hats. And then you mentioned the ethical framework that you worked on with the US government.
(07:01):
I'm really interested to explore more about that, and what you're seeing across the industry. So let's explore that a little bit. When we talk about an ethical framework, can you help illuminate us a little on what that really means?
Speaker 1 (07:15):
Yeah, that's a very interesting topic, and very pertinent to the dialogue that's happening, especially being in Washington DC, with the Hill and big tech having conversations with the government, right. We started this initiative one year before the pandemic, and it was an initiative with multiple leaders actually
(07:36):
working in the federal IT space; they might be CIOs and CTOs in federal agencies. So it was a collaboration between industry, which I represented, academia, since some university members were also there, and these federal leaders. We came together and said: hey, you know, we are all seeing
(08:00):
emerging technologies like artificial intelligence coming up, but those of us who are technology-oriented are also aware that with large data comes the challenge of data biases. We are human and we have biases, and those will perpetuate into the data and actually
(08:22):
exacerbate this problem, right.
So machine learning, just to reiterate, is based on large quantities of data. Even for those of you who are enjoying using ChatGPT: they built a large language model using existing data off of the internet. So without data there's no artificial intelligence, right?
(08:43):
Whether it is synthetically created or drawn from the existing corpus of data. So we all knew, as people working in this space, that something had to be done, and we started this dialogue about various aspects of ethical considerations: fairness, how to deal with bias,
(09:03):
transparency, responsible use of AI. So we divvied up, formed subcommittees and created working groups where we came together, debated and discussed: what are the problems, and how do you identify them? That's one. Secondly, it's not enough to identify them:
(09:24):
how do you mitigate the problems? And thirdly, how do you educate and advocate about these existing issues that could lead to really bad outcomes, especially for folks who are already facing them? We know that there are certain sectors of the community that have biases
(09:46):
perpetuated in society already, and that would actually be exacerbated, because we are building machine learning on top of data.
So, for example, I'll give you a quick example in criminal justice. If you're using a machine learning algorithm, it would look into historical data, because machine learning
(10:07):
algorithms are based on historical data, and it could be biased against African-Americans, because it might see patterns in the data suggesting a propensity for offenses or misdemeanors, which is not being very fair to the person who is
(10:29):
present in a court. And I know ProPublica has published articles where a teenage girl who had stolen a bike went in front of a judge, and there were algorithmic recommendations on whether it was a bailable offense, even though it was a misdemeanor and
(10:50):
she didn't have any history, and they compared the case to a similar case involving a man and how those outcomes were different. So it can have far-reaching consequences in society, whether known or unknown, especially if the people or organizations using these algorithms don't even declare
(11:11):
so.
If you are getting recruited by an organization, and the organization used an algorithm to decide whether you should be interviewed or not, then some bias entered into that decision-making process. So it's very important for those of us in the technology field to start educating and advocating about the risks and the bias that could be perpetuated, and this is why we
(11:34):
always say there should be a human in the loop for the decision making. But at the same time, the human in the loop needs transparency into what attributes or factors went into consideration before the algorithm made its decision to say: hey, interview this person,
(11:55):
do not interview this person. So there is an onus on technologists, but I would not leave out the non-technologists. We have to have an open dialogue with the legal team, with your HR, with everyone in the room, about the risks and the inherent possibilities of bias.
Speaker 3 (12:15):
It's definitely a concern. I think there is a lot of bias built into artificial intelligence, and it's having a social impact across different sectors and especially across different communities. I'd love to explore the idea of coming to some consensus between big tech and government entities around helping with that.
(12:36):
I'm not sure there is ever a way that we see AI built without a bias. But let me flip the coin: is there a good type of bias built into artificial intelligence?
Speaker 1 (12:49):
I recently read a use case of good bias, where an algorithm recommended somebody who never had an opportunity for a job before. But before we can go down that path, one of the things I would say is that we can learn a lot from the healthcare and biomedical industry, right? Today, if you ask any biomedical or healthcare
(13:14):
researcher: there is a possibility of gene editing, there is a possibility of brain transplants. There are a lot of things that are, biologically speaking, possible, but they are not allowed because of bioethics principles and laws and regulations. So I think we can learn from that industry and say:
(13:37):
yes, there are possibilities of using AI for a lot of different use cases, but some of that might not be allowed. Biomedical science has come a long way; by comparison, we are in the nascent stages of AI moderation and the establishment of guardrails.
(13:58):
I know Biden has issued the executive order. I read that long paper multiple times. Where we are right now is not even at a warning level; it's more about: these are the considerations. The next step should be that we start issuing warnings and labels. I think just this past week, broadband providers were required to issue labels
(14:21):
about their speed, et cetera. So we should start issuing labels. Think of them as something like the nutrition labels on your food. You will have to have labels for AI: how transparent is the algorithm, what are the sources of data, things like that. Since it's a complex topic, both technologically and
(14:41):
research-wise, we are not yet there. But I see a future where we take a leaf from the biomedical sciences and say: well, we shouldn't be using it for criminal justice. Can we use it to expedite all our law cases? Yes, there is possibility and potential, and it could totally
(15:01):
have the capability to do it. But should we use it? That is the question, and maybe not, just like we don't want to use gene editing right now, although the science exists. So I think that's where we would go.
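To make the label idea concrete, here is a minimal sketch of what such an AI label might contain, written as a Python dictionary. The schema and every field name are illustrative assumptions; no standard of this shape exists yet, and a real label would come out of the kind of regulatory process discussed above.

    # Hypothetical "nutrition label" for an AI system; all fields are assumptions.
    ai_model_label = {
        "model_name": "resume-screener-v2",        # hypothetical system
        "intended_use": "rank applications for human review",
        "prohibited_uses": ["final hiring decisions without human review"],
        "training_data_sources": ["internal applications 2015-2023"],
        "known_limitations": ["under-represents regions with low digital access"],
        "transparency": {
            "inputs_documented": True,             # are the input attributes published?
            "gender_anonymized": True,             # was gender obfuscated in training?
        },
        "human_in_the_loop": True,                 # a person reviews every output
    }

    # A regulator or auditor could then check required disclosures mechanically.
    required = {"intended_use", "training_data_sources", "human_in_the_loop"}
    missing = required - ai_model_label.keys()
    print("missing label fields:", sorted(missing) or "none")

Nothing here is specific to any real product; the point is only that, like a food label, the disclosures are structured and therefore checkable.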
Speaker 3 (15:14):
What are some of the challenges that pop up in this space when you navigate conversations between people in the tech sphere and people in the government sphere? You've got people with a very deep level of technical proficiency when it comes to artificial intelligence, and then some of the people who work on the government side. What are some of the challenges?
(15:36):
Is it more of a learning curve, where they maybe don't understand the tech fully? Or is it something where they understand it, but they're trying to wrangle how to implement policies around the tech? What are you seeing?
Speaker 1 (15:49):
A bit of both. Yeah, it's a good point. We are actually having some conversations with people on the Hill for another project I'm working on with Georgetown, because I'm pursuing my executive MBA there and doing a capstone project. So we've had a lot of conversations in the AI governance space. I think it's a bit of both, because the complexity of AI is that
(16:12):
it's a general utility, analogous to electricity, so the use cases are multitude. You can't even start documenting the use cases, just like with electricity. And, to the extent I can make an analogy to electricity, the process of producing it is also very complex and comprehensive.
(16:34):
So there is a bit of that, and a lot of people on the Hill are coming up to speed and getting educated. And this is where I come in with my videos and try to do education and advocacy around demystifying some of the AI, because folks who are not in technology don't need to know the science behind the algorithms, the
(16:56):
math behind the algorithms, to the extent that people think they need to. We don't want to go and see how electric current is produced, how the electrons are moving. I was just reading a book about electricity last night to my eight-year-old, and I was like: oh my God, what a complex process it is for electric current to flow through a wire. So,
(17:17):
similar to that, if I could make an analogy: we might not need to know the math behind it. What is more important is to know what inputs go into this decision making.
(17:38):
So, for example, if it's used for recruitment, the questions are: are you looking at all the historical data of all the candidates who have applied and the candidates who have been rejected? Are you looking at the historical data of your organization and the promotions that have happened in your organization? Are you looking at gender data? Are you obfuscating the gender data? Are you anonymizing the data? These are the logical questions anyone in that particular field should be asking. So that is where I try to have a lot of conversations with
(18:01):
people.
You are a subject matter and domain expert in your area, and the types of questions you should ask about AI are these. What are the inputs? How does this black box called AI, the algorithms, process them, as in, what is the weightage of the attributes? Taking the recruitment case again: if you're
(18:21):
feeding a thousand of your employees as input to your recruitment algorithm, are you anonymizing the gender, and what are the outcomes? Because even though you anonymize the gender, maybe women have been promoted less. What are you doing about it? Your output recommendation might be whether this person has to be interviewed or not. So if you're a subject matter expert, learn to ask the right
(18:44):
questions about the inputs, the process or business logic that's happening, and the outputs. And why is the output this way? Why is it a false positive or a false negative, and what is the rationale behind it? If you know it to that extent, I think you're in a good place.
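Here is a minimal sketch of those input/process/output questions applied in code. Everything in it is an assumption for illustration: the file candidates.csv, its columns (a binary "interviewed" outcome, a "gender" column, and otherwise numeric feature columns), and a simple logistic regression standing in for whatever model is actually used.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Hypothetical historical hiring data: one row per past candidate.
    df = pd.read_csv("candidates.csv")

    # Input question: is gender obfuscated before training?
    X = df.drop(columns=["gender", "interviewed"])   # drop the outcome and gender
    y = df["interviewed"]
    model = LogisticRegression(max_iter=1000).fit(X, y)
    df["predicted"] = model.predict(X)

    # Process question: what is the weightage of each attribute?
    print(dict(zip(X.columns, model.coef_[0].round(3))))

    # Output question: even with gender removed from the inputs, do
    # recommendations still differ by group (e.g., via historical promotion patterns)?
    print(df.groupby("gender")["predicted"].mean())  # selection rate per group

    # Rationale question: false positive / false negative rates per group.
    for group, sub in df.groupby("gender"):
        fpr = (sub.loc[sub["interviewed"] == 0, "predicted"] == 1).mean()
        fnr = (sub.loc[sub["interviewed"] == 1, "predicted"] == 0).mean()
        print(f"{group}: false positive rate {fpr:.3f}, false negative rate {fnr:.3f}")

A real audit would go further (held-out data, calibration, documented mitigations), but even this level of checking answers the questions listed above: what went in, how it was weighted, and why the output came out the way it did.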
Speaker 3 (19:02):
That's incredible. I think that's the hard part, right? Even from the technologist's perspective, even for some of us who are in it, there's still almost an ocean of questions to ask, and you can get stuck in paralysis by analysis, not knowing what the right questions are. And it's interesting, because you're saying: here's a series of questions you should ask, and
(19:24):
here's a series of outputs you should look to derive from that type of data. I want to touch on the inclusion part, the inclusion and diversity within the AI strategy. Can you paint a picture of what this looks like in practice when we talk about inclusion and diversity within
(19:46):
AI?
AI?
Speaker 1 (19:47):
Yeah, I think it's a
great question because we often
assume, when it comes to ethicalAI or responsible AI, who are
writing AI algorithms, who aredoing AI research and there's a
(20:16):
lot of statistics around itwhich I have covered in my
Forbes article.
But in a nutshell, a couple ofthings.
The more diverse your team is,who writes these algorithms, the
more inclusive your outcomeswould be be.
Think of it as diversity bydesign.
So you are having these diverseperspectives.
That's come into play.
(20:36):
So, for example, when it comesto large language models, if
somebody is from russia, theythink when this large language
model has to be used with therussian language, they are more,
you know, conversant in thenuances of that language.
Similarly, diverse engineeringmachine learning teams will
bring that nuanced approach todesign and ask the right
(20:58):
questions about being inclusive.
So that's being inclusive inyour machine learning team.
But the second aspect is being inclusive in your data sets and data sources. A very small example: think of it, if you're doing analysis on some healthcare data, your government agencies will get healthcare data from all the hospitals in the US, and
(21:21):
there might be some regions and pockets in the US from which they're not getting healthcare data, and they could, hypothetically, move on without that data. But the question is, you have to dig deep into why certain regions are not giving the data. The reason could be a lack of access to healthcare in those
(21:42):
pockets, right? So by asking a technology question about your data sources, you're hitting on a societal point and a challenge. This is where you have to have an inclusive set of people in the conversation when you're designing an algorithm. If you had your subject matter experts and healthcare officials in the room, they could say: hey, you're not getting the data, and you
(22:05):
might omit that data because no data exists, because there is a lack of access to healthcare for certain individuals and certain pockets of communities, and maybe there is another way to get the data, because otherwise, by omission, you're excluding that population from your outcomes. Those kinds of questions could be very useful for including
(22:28):
everybody, with a diverse team of your subject matter experts.
And the last point I want to make is that it's not enough to have a diverse team and an inclusive team. You should have an inclusive culture in your organization. And, as a woman leader and a woman of color, I can speak personally: I will not thrive if the culture is not inclusive. You have to be very inclusive and give
(22:52):
opportunities to the people who are part of your organization.
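The data-source point lends itself to a simple, concrete check. Here is a sketch of it; the file names, the column names, and the idea of comparing against an authoritative region list are all assumptions for illustration.

    import pandas as pd

    # Hypothetical inputs: hospital records with a "region" column, plus an
    # authoritative list of all regions that should be represented.
    records = pd.read_csv("hospital_records.csv")
    all_regions = set(pd.read_csv("us_regions.csv")["region"])

    # Which regions are missing from the training data entirely?
    covered = set(records["region"].unique())
    missing = sorted(all_regions - covered)

    if missing:
        print("No records from these regions; investigate access gaps before training:")
        for region in missing:
            print(" -", region)
    else:
        print("All regions represented.")

The interesting part is not the code but the follow-up question it forces: whether a missing region reflects a reporting quirk or a real lack of healthcare access, which is exactly the societal question raised above.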
Speaker 3 (22:57):
Yeah, 100% agree, absolutely. You mentioned healthcare a couple of times, and I wanted to explore some of those things. You mentioned data, and there's more and more emergence of artificial intelligence being used by practitioners. For example, doctors working in surgical centers and emergency care units being able to triage patients who come in with
(23:19):
certain symptoms, look across a person's medical history to see what medications they're taking, and then have some sort of predictive and prescriptive approach to care. And you mentioned a human still has to be in the room, right? The doctor is still looking at and interpreting the recommendations, and then making the decision based upon them. What are you seeing from the healthcare space
(23:42):
around the adoption of AI in things like emergency room centers or urgent care centers?
Speaker 1 (23:49):
I can't speak to urgent care or emergency, because I have no personal experience there and I've not read any in-depth use cases. But there's a lot of activity happening in areas such as hospital management, in research to improve patient outcomes, and in pharma.
(24:09):
The big pharma companies are already leveraging it to reduce the time for clinical trials; we know that is the long pole. And there's a lot of activity and research happening in hyper-personalized medication, because one of the things we know is that the medicine you take and the medicine I take will react differently because of my genes
(24:30):
and your genes, because we come from different lineages, right? So there's a lot of activity happening in that space, for sure. With respect to diagnosis, the activity I'm noticing is that there are still a lot of handwritten notes and things of that nature, which are being
(24:50):
leveraged using natural language processing to improve diagnostic accuracy. And the biggest area of adoption is radiology. There is one interesting project I did in the lung cancer research space, where we took the CT scans, images of lung cancer patients, and we essentially automated the reading
(25:13):
of the CT scans using machine learning and vision processing. So there's a lot happening in that area as well.
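For readers curious what "automated reading of CT scans" looks like at the code level, here is a very rough sketch. The file name is hypothetical, the tiny network is an untrained stand-in for a real, validated model, and in practice a radiologist reviews every output (the human in the loop again).

    import pydicom
    import torch
    import torch.nn as nn

    # Load one hypothetical CT slice and normalize its pixel intensities.
    scan = pydicom.dcmread("slice_001.dcm")
    pixels = torch.tensor(scan.pixel_array.astype("float32"))
    pixels = (pixels - pixels.mean()) / (pixels.std() + 1e-6)
    pixels = pixels.unsqueeze(0).unsqueeze(0)        # shape (1, 1, H, W)

    # Toy classifier standing in for a trained lung-nodule model.
    classifier = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(8, 2),                             # benign vs. suspicious
    )

    with torch.no_grad():
        probs = classifier(pixels).softmax(dim=-1)
    print("suspicious probability (untrained toy model):", float(probs[0, 1]))

Real systems differ in every detail (3D volumes, trained weights, clinical validation), but the shape of the pipeline, image in, normalized tensor, model, probability out, is roughly this.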
Speaker 3 (25:21):
That's incredible to see. What are your predictions for the next five years? Where do you see AI trending? Where do you see government coming out in terms of being able to meet the technologists, almost at a tipping point where there is a kind of joint venture partnership between government and the tech industry, where they
(25:42):
can come together, collaborate, and actually put some real regulations around AI? Where do you see that happening in the next couple of years?
Speaker 1 (25:53):
Yeah, that's a very interesting question. I recently wrote a very detailed blog after the European Union came up with very stringent measures against AI use cases, implementation use cases and so on. Maybe the US will follow suit. Obviously, we err on the side of innovation compared to
(26:15):
establishing more regulation, right? So the thing I see is the rate at which AI technology is accelerating, especially since ChatGPT, both because of the hardware (think of NVIDIA and Intel coming up and providing the GPUs) as well
(26:35):
as the availability and the corpus of these language models. Many of them are open source; sometimes they're closed. It's accelerating at an unprecedented pace. One year back, I think I was reading somewhere, ChatGPT's context window could handle a blog post. Today it can read something like a 500-page Leo Tolstoy tome, War and Peace, right? So it's accelerating at an unprecedented pace, and, from what we heard from Sam Altman's interviews, artificial general intelligence is on the horizon. It's interesting. Eight years back, when I got into the world of machine learning, I was so curious about artificial general intelligence. I went and read so many papers and discourses from the world's
(27:21):
leading authorities on AI research (this was eight years back), and their prediction at that point was that AGI was 50 years away. But looking at the pace at which OpenAI is accelerating, I would say that in the next two years AGI could be a possibility.
would say in the next two yearsit could be a possibility, agi.
But I think we as a society andhumanity as a whole, the
(27:43):
adoption and people have a lotof friction to adopting things
very quickly, right?
So and I deal with a lot offederal agencies and a lot of
other agencies you won't believethat some of the organizations
are still in paper-basedprocessing.
They have not even come to thedigital transformation.
So, while there is incrediblepossibilities of adoption, I
(28:09):
think the adoption will slowdown and the hype cycle will
slow down.
But I think it will betechnically feasible to have an
artificial general intelligenceand, um, the government will
come around to establishingstringent guard rails because,
like I said, we've learned a lotfrom biosciences.
We might have definitely someguardrails about using AI
(28:32):
specifically, whether it's indefense purposes, using AI for
criminal justice, these specificuse cases will have more and
more stringent guardrails.
I think a couple of years downthe line, definitely a lot more.
A lot more what to say, holdingthe feet to the fire sort of
(28:55):
responsibility of the people whocreate the algorithms Because
right now there are no rulesright, because, like, there is
so much ambiguity.
If you are the creator ofalgorithms, you can license your
algorithm and things like that,so who is responsible?
There's no legalities around it.
I think there's a whole new,exploratory way of establishing
(29:16):
legal policy and regulationaround that as well.
Speaker 3 (29:19):
That's incredible. So, for many of our listeners out there: how do we get involved in the movement around building agreement towards an ethical AI framework that's inclusive and has diversity strategies built into it? What are your final thoughts on that? How do we, as technologists, get involved and stay involved in that movement?
Speaker 1 (29:39):
No, that's a great question, because whether you're a technologist or actually even not a technologist, everybody has to know the risks of AI. They have to know the boundaries of ethical AI use. They have to know the use cases of responsible AI, and the use cases that can have very detrimental effects
(30:03):
on society. So the best way is to follow some amazing people on LinkedIn. There's Elizabeth Adams; she's very prolific. Dr. Joy Buolamwini, who is the well-known face and name in the ethical AI space, just came out with a book; please follow her. And there are many others. I can also post a link to the ethical AI framework I
(30:25):
co-authored for the US government, which is publicly available. So that's one resource I can offer. But there are a lot of conversations happening, and a lot of conversations happening on the Hill as well. Sam Altman alone has been called to the Hill multiple times in the last
(30:46):
year. So just keep yourself abreast of all the happenings on this topic and follow some of the well-known names on this topic.
Speaker 3 (30:54):
Amazing. And Swathi, where can we follow you, so we can continue to keep up with you? I know things are moving so fast. Where's the best place to follow you?
Speaker 1 (31:02):
All the socials. I am Pink in Tech on TikTok. I am very, you know, proficient and prolific on LinkedIn; I post a lot of videos there. And I'm also on YouTube, where, actually, two years back, I did a series of interviews with people working in the ethical AI space.
(31:22):
Those of you who are more interested in that space can check out my YouTube channel, also under Swathi Young.
Speaker 3 (31:30):
Wonderful. Swathi, thank you so very much for joining us on the Tech Travels podcast. Your insights into this topic are extremely fascinating. We hope to have you back on again. Thank you for sharing your vision with us. It's so amazing to see you leading this charge. So thank you for all the work that you're doing, and we look forward to the work to come.
Speaker 1 (31:47):
Thank you so much, Steve, for having me. I look forward to hearing from you and your audiences about any questions you have on AI or ethical AI.
Speaker 3 (31:56):
Wonderful. Thanks, everyone.
Speaker 1 (31:57):
Thank you.