Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Welcome, RVA, to Inspire AI, where we spotlight companies and individuals in the region who are pioneering the development and use of artificial intelligence.
I'm Jason McGinty from AI Ready RVA.
At AI Ready RVA, our mission is to cultivate AI literacy in the greater Richmond region through awareness, community engagement,
(00:24):
education and advocacy.
Today's episode is made possible by Modern Ancients, driving innovation with purpose.
Modern Ancients uses AI and strategic insight to help businesses create lasting, positive change with their unique journey consulting practice.
(00:44):
Find out more about how your business can grow at modernancients.com.
And thanks to our listeners for tuning in today.
If you or your company would like to be featured in the Inspire AI Richmond episode, please drop us a message.
Don't forget to like, share or follow our content, and stay up
(01:07):
to date on the latest events for AI Ready RVA.
And we're back.
Today we're talking to Brian Ilg, who is an entrepreneur and AI strategist with a diverse background spanning business operations, IT solutions and AI governance.
(01:29):
He currently works as chief sales officer with Deploy Dynamics and Babel AI.
He got his start by running a scooter rental business and made his way into navigating the complexities of cellular IT partnerships.
Brian's journey has been anything but conventional.
(01:52):
His unique ability to reverse engineer problems, identify inefficiencies and align stakeholders has positioned him at the forefront of AI-driven business transformation, and today we are exploring Babel AI and their role in responsible AI.
Thanks for joining us today, Brian.
Thanks for having me, Jason.
Speaker 2 (02:11):
I'm excited to chat today about Babel and what we're doing.
Speaker 1 (02:14):
Awesome.
Speaker 2 (02:15):
Well, can you start by… Maybe you'll have to bring me back to talk about the scooter business down the road.
Speaker 1 (02:21):
Absolutely, I would love to.
I think entrepreneurs have a huge role in the future of AI, so I'd love to get to the bottom of what you're talking about with your scooter business.
So in the future, definitely, we will reconnect.
This episode, let's start the audience off by telling us a little bit about your interest in AI and what brings you here today.
Speaker 2 (02:45):
Yeah, absolutely.
Clearly, I've dabbled in a bunch of different things in my career, and I was fortunate enough to be connected with Babel AI a few years ago, mentoring new businesses through the University of Iowa, and I felt really compelled to join their mission and what they're trying to do with AI.
(03:05):
So I'm really excited to talk about what they're working on.
It's kind of a new space, and hopefully I can shed a little bit of light on what responsible AI is, how Babel AI's services help build trust within products and services using AI, and, ultimately, how we innovate faster with more confidence.
Speaker 1 (03:33):
All right, so then start us out by telling us: what does Babel AI do, and why is independent AI assurance so critical to today's landscape?

Speaker 2 (03:38):
Yeah, absolutely.
Babel AI, really cut and dry: we are an organization that audits and provides assurance for AI systems, for bias, legal risk, legal compliance and effective governance.
So we are helping organizations make sure that they are not exposed to any sort of unforeseen risks.
(04:01):
We're also helping build trust in the markets, with new AI solutions popping up left and right, making all sorts of claims.
The function of auditing in business is generally attributed to financial auditing, assuring that what people are saying and reporting is trustworthy and accurate, for market stability.
We are bringing that into the fold of AI responsibility.
(04:27):
We really believe that AI auditing is probably one of the last jobs that will ever exist because, as task-based work goes away in lieu of AI systems, people are going to need to be able to manage AI, which is kind of the AI governance component, and then we're going to need to be able to verify and validate that what the AI is doing is trustworthy, hence the audit function.
So that's where we're coming from.
Speaker 1 (04:50):
Interesting observation you just shared with us.
You said you believe this is the last job that will ever exist.
Did you mean that in a future sense, where ultimately there's nothing left for humans to do but manage and monitor AI governance?
Is that what you really meant by that?
Speaker 2 (05:11):
Yeah, you know, it's a futurist point of view I bring, and you have to have a little bit of that what's-around-the-corner perspective to think about AI, because it's growing so exponentially in all these directions.
So figuring out how to manage AI matters because, unfortunately, I believe
(05:32):
it's going to take a lot of jobs that exist today and create a lot of new jobs tomorrow that will be wholesale different.
I think those new jobs are going to be more about managing AI than doing tasks, because the AI is going to be able to do the tasks, so we want to make sure those tasks are being done correctly, with confidence and trust.
That's what we bring to the table.
Speaker 1 (05:53):
Yeah, yeah.
When I think about some of the risk and exposure to companies just throwing AI solutions out there, it takes me back to one of the previous podcasts we did on responsible AI.
It's kind of a 101 on that, and you definitely contributed to it, so thank you.
So your team plays a key role in AI auditing research, with
(06:15):
funding from IBM's Tech Ethics Lab and Notre Dame Research Center.
What were some of the foundational principles you established, and how do they shape AI governance today?
Speaker 2 (06:30):
Yeah, so our company was founded back in 2018 by our CEO, Shea Brown.
He's a professor of astrophysics and machine learning, and we have a lot of legal experts and researchers on the team.
That started as kind of our core research team.
So we were funded by grant research on how to provide
(06:50):
assurance for AI, and our whole idea stemmed from that thought: we need to know how to manage AI into the future, because it's going to do big things, and we want to make sure those things are done correctly and there's trust.
So our whole research, and we published this, there are three or four papers, I've read three,
(07:11):
I think there might be a fourth, on this specific topic of AI auditing.
So we really break it down into three sections, and we can get into how this all comes together a little bit later.
But governance is super important.
So who's accountable, and what are the desired outcomes?
What's in your policies that's important for making sure
(07:33):
that AI is trustworthy?
Right, we have to know where we're going so that we can walk it all back towards, kind of, the baseline.
Second would be: what are the metrics that you're using to really define what is right?
How are you tracking those, and what risks are associated with them?
So what might not be covered, or are those metrics
(07:56):
producing outcomes that are unintended?
And then the third thing is: have you tested it?
Can you reproduce the testing, and is that testing consistent?
And that goes into pretty much what we're calling our AI audit card today, which is a simply digested report, not simple to produce, for people to look at and see
(08:19):
what's going on in these systems and assure trust within the systems.
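
To make that three-section structure concrete, here is a minimal sketch in Python of what an audit card covering governance, metrics and testing could look like. The field names and checks are illustrative assumptions, not Babel AI's actual report schema.

from dataclasses import dataclass, field

@dataclass
class AuditCard:
    # Section 1: governance -- who is accountable, what outcome is intended.
    accountable_owner: str
    desired_outcome: str
    policies: list[str] = field(default_factory=list)
    # Section 2: metrics -- how "right" is defined and what risks remain.
    metrics: dict[str, float] = field(default_factory=dict)
    known_risks: list[str] = field(default_factory=list)
    # Section 3: testing -- reproducible, consistent test evidence.
    tests_reproducible: bool = False

    def gaps(self) -> list[str]:
        # Flag missing elements in any of the three sections.
        problems = []
        if not self.accountable_owner:
            problems.append("governance: no accountable owner")
        if not self.metrics:
            problems.append("metrics: no metrics defined")
        if not self.tests_reproducible:
            problems.append("testing: results not reproducible")
        return problems

card = AuditCard(accountable_owner="", desired_outcome="fair resume screening")
print(card.gaps())  # all three sections are flagged for this empty card

Even this toy version hints at why the real report is "not simple to produce": each field stands in for interviews, documentation review and test evidence gathered during an audit.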
Speaker 1 (08:26):
Yeah, definitely not a simple thing to pull together, for sure.
So tell us a little bit about your thoughts: why is AI governance such a challenge for organizations today, and what are the most common gaps you see in AI risk management?
Speaker 2 (08:43):
Yeah, I'd say why it's a challenge is that it's just new.
I don't think governance or regulation, or even a small internal task force, if you will, that's kind of a way to think about it, none of that's new to business.
It's just that what AI is capable of doing is so tremendously
(09:03):
different from any technology before it.
In some of our conversations, because we do talk to philosophers, we're really asking: what's the purpose of humans?
These are very new questions for business.
They're very new risk profiles, and proper governance for AI and its unique risks really takes a different
(09:25):
approach.
It's not something that's best served by a single person.
It's generally a committee: lots of diverse thought, analytical, critical thinking.
Does it align with strategic missions and values?
So for organizations that are kind of winging it and trying to throw AI in, you can see where that doesn't align best, and
(09:46):
there's all sorts of risk in people not being deliberate, thoughtful and intentional about the use of AI.
So I think the move fast and break things model is traditional SaaS.
I don't think that works super well with AI until you get your governance right.
Speaker 1 (10:05):
Just different, I think.
I agree, corporations move fast because they're on a time budget.
If they have the innovation, the technology to make change and disrupt the industry and grab the competitive
(10:26):
advantage, they're going to, and they're not going to wait until governance catches up to them before they deploy it.
They're going to put it out there, they're going to see what breaks, make a bunch of money, and then, when governance comes around and reins them in, they accept their fate and it puts
(10:50):
another barrier to entry in the way.
But that's the world we operate in.
Innovation happens too quick, right?
Speaker 2 (10:57):
Yeah, it's a good way to say it, and not to get too far off into that topic, but I do think there's quite a bit of opportunity for people who don't play the game that way.
I think there's a lot of enterprise talent floating around out there and, with a lot of strategy and thought and the use of AI to do a lot of task-based things,
(11:23):
there are a lot of companies that will be able to get it right first. And then, when those other companies make mistakes because they're not paying attention or they're playing by old rules, they won't be able to just say, oh well, that's the cost of doing business.
There'll be someone else with a better solution that people will move to, because I do think there's that tension in the market right now, and that opportunity.
(11:43):
So how I like to sell governance to people, to help them buy in, is: if you do this right, and you have the right trajectory and aim for your organization, and can demonstrate that aim through assurance or trust or anything along those lines, that's who's going to win in the future of business.
I do think there's a changing of the guard in terms of
(12:04):
business strategy.
Speaker 1 (12:06):
Interesting.
Yeah, you're saying that the old ways aren't going to last much longer, and people are going to catch on and recognize that those who are being responsible in these areas and creating trustworthy applications with this technology are going to be the ones that stand out, because the rest of them are in it for a buck.
Speaker 2 (12:28):
I think so, and that just goes back to people, and this is kind of ethics, if you will.
I know that's a really sensitive word in this space, but the ethics are those undeniable truths that people have.
The things that people always seek as value, as consumers, are things like: saves me money, saves me time,
(12:51):
reduces my pain, and then also brings me the ability to grow, and peace of mind. I think a lot of business models today tend not to do the maximum amount they could to solve those, and I think there's opportunity there.
But that's more about the intent and desired outcome of your AI,
(13:13):
which, again, is stuff that Babel does routinely.
When we work with organizations, we can walk in, and this goes back to our audit card and our assurances.
So we start with governance: who's accountable?
What are your desired outcomes?
What does right look like for you?
And I think this is where regulation has been this very
(13:36):
tipping-point conversation: is it just regulation that you're really concerned about, or not?
But now that the regulation is kind of getting its teeth ripped out, it really just boils down to: what's the business outcome you're trying to achieve? That's where we focus with our customers, what does right look like for you, and then we make sure that it's being delivered.
(13:58):
So are you mapping it with the right metrics?
Are you aware of the risks that might mean you won't land on your desired outcome because of those metrics?
And then, are we testing it?
Is it actually delivering?
Is the system itself built in a way that doesn't have
(14:19):
things like model drift or hallucinations, and those types of things?
Not everything's going to be perfect, but is it within your risk tolerance too?
So those are all really interesting conversations that we have with our customers routinely.
And yeah, from a business, entrepreneurial, cowboy point of view, I guess, which is kind of where I land
(14:41):
these days, I do think there's a better way to go about doing that.
Speaker 1 (14:46):
Yeah, and naturally people want to do the right thing, so that does sound good. Makes perfect sense to me.
So tell me: many organizations adopt AI solutions without clear governance or risk management.
What are the biggest blind spots that companies have when it comes to AI risks?
Speaker 2 (15:20):
I'm no expert here, I'm just regurgitating, but there are a lot of decisions that historically have been subjective, humans making the choice of A or B, and that's been acceptable: all right, well, we trained our staff to make decisions a certain way. But now they're quantified decisions, so 37% of the time customers go this way, the others go that way.
(15:40):
Can you defend that legally?
Why was that decision made?
This is a whole new ballgame, and what I think a lot of organizations are doing when they go quick, boom, boom, boom, I want to get AI in, is moving without knowing how to build the systems.
(16:02):
And, or, maybe it's built into the code in such a way that they're putting quantified decisions that could potentially implicate them legally into engineers' hands, rather than really bringing that collaborative team together, HR, legal, engineering, to ask: why are we making this decision rate for resume parsing in an HR hiring experience, right?
So there are laws that exist, and at the end of the day, there is bias in every
(16:27):
decision, but we have to be able to back it legally, and there are existing laws that preceded anything ever written for AI.
Organizations have to make sure that they're paying attention to that stuff.
There are many, many more examples of that, but that may be a good one to think about.
Speaker 1 (16:48):
Yeah, yeah, you mentioned it a couple of times, I think: the diversity of thought.
I think that, by and large, if you can put diversity of thought into the equation, you'll get most of the way there, right?
Different people trying to solve the same problem together in the same room, you will do the right thing, I believe.
(17:12):
So, thinking about these risks a little bit more: what are some of the biggest AI-related legal, reputational or financial risks that businesses tend to overlook?
Can you share any real-world examples of failures?
Speaker 2 (17:27):
Yeah, I think some that I like to call out are pretty common from a risk profile.
Existing enterprise risk management systems, or risk management frameworks in the financial industry, for instance, are really robust, and that's, in their words,
(17:48):
a lot of red tape, a lot of regulation that they've got to sort through, for good reasons, to protect consumers.
But AI brings a whole new risk profile into the fold.
So assuming that those really rigorous pre-AI risk management frameworks, we can call them, are good enough, I think, is a real big gap in knowledge.
(18:10):
There's a ton of new risk profile elements, like we just mentioned.
So a different one, outside of the quantified choices: just your procurement policies, your procurement rigor.
How are you inspecting if you're bringing, say, a paid service into your organization?
How are you validating and testing the
(18:32):
claims, the accuracy, all of those things?
Because once you employ that tool in your workflow, pretty commonly we're seeing that you're materially changing the intended use under the terms and conditions of that tool, which now passes the liability for any sort of error onto the
(18:53):
organization that's employing the tool.
So even having weak procurement rigor, and policies that don't inspect correctly, opens you up legally to unintended use or unintended outcomes. If you're in a bank, this is a different way to think about it, and you're a teller: if a customer asked, how are you
(19:16):
making decisions using AI with my money, would you be able to explain it?
That creates new friction between consumers and banks.
And those are conversations that, if you're not buttoned up with policies to defend, and training and those things, become real challenges for institutions and organizations to speak to, and that opens up risk.
Speaker 1 (19:39):
Yeah, transparency and explainability with the latest technologies are some of the hardest things to overcome in establishing new innovations around these tools and use cases.
I get that, yep.
So many companies adopt AI without clear oversight.
Tell us a little bit about the dangers in that, and how does
(20:00):
Babel AI help organizations establish proper governance?
Speaker 2 (20:05):
Yeah, so we help organizations all over the place, with respect to where they are in their AI governance journey.
We're aware that organizations are, by and large, in kind of a messy state right now, though there are some that really have it. What we do is help organizations by doing an AI governance gap analysis.
That's always the best place to start, and how we do that is
(20:28):
we'll go in, and every single AI use case or system is unique.
So we come in with our team of experts, and we go through discovery, where we get a good scope of all of the inputs, the decisions and the desired outcomes of the system.
And that starts with the things I mentioned, with governance
(20:49):
and interviews and just getting to know folks.
From there, we do a side-by-side gap analysis and risk review.
As we identify gaps, we rank the risk, and we do this so that we can ultimately map the biggest-impact items and the shortest time windows to resolve them, and we're trying to map
(21:10):
those towards a risk management framework that does cover those unique AI risks.
So that's also part of the discovery.
When we talk to organizations, ultimately that's delivered in a report, and then we leave them with the decision on how they want to move forward. We can continue to help, or they can take it back internally, but we have everything buttoned up at the end.
(21:32):
I was going to mention, just real quick, a risk management framework would be something like: if you're just doing business in the United States, that could be the NIST AI Risk Management Framework.
If you're more global, you might want to map those risks towards something like the EU AI Act, which is a little more rigorous. It's just differences of geography, where the business is conducted,
(21:52):
where the AI is being used.
So we help organizations get to, again, what's right for them.
So it is a custom process, but it's formulaic in our approach.
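
As a rough illustration of that custom-but-formulaic approach, here is a small Python sketch of gap-analysis records mapped to the NIST AI RMF's four core functions (Govern, Map, Measure, Manage). The specific gaps, risk rankings and time windows are hypothetical examples, not Babel AI's methodology.

# Hypothetical gap-analysis records from a discovery engagement.
NIST_AI_RMF_FUNCTIONS = {"Govern", "Map", "Measure", "Manage"}

gaps = [
    {"gap": "no accountable owner for the hiring model", "risk": "high",
     "rmf_function": "Govern", "weeks_to_resolve": 2},
    {"gap": "vendor accuracy claims never independently tested", "risk": "high",
     "rmf_function": "Measure", "weeks_to_resolve": 6},
    {"gap": "no inventory of AI use cases across departments", "risk": "medium",
     "rmf_function": "Map", "weeks_to_resolve": 4},
]

# Prioritize as described above: biggest-impact items first,
# then the shortest windows to resolve them.
risk_order = {"high": 0, "medium": 1, "low": 2}
for g in sorted(gaps, key=lambda g: (risk_order[g["risk"]], g["weeks_to_resolve"])):
    assert g["rmf_function"] in NIST_AI_RMF_FUNCTIONS
    print(f'{g["risk"]:>6}  {g["rmf_function"]:<8}  {g["gap"]}')

The same records could just as easily be mapped to EU AI Act obligations instead; only the target framework changes, not the process.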
Speaker 1 (22:03):
Cool, cool.
So what are some of the most overlooked aspects of AI testing?
Are there any industries that are getting it right, setting a good example for everyone else?
Speaker 2 (22:19):
Organization by organization might be the better way to say it, instead of industry by industry.
But one of the things that we see is user testing, beta testing, ensuring that end users understand what's going on.
It's a rushed market, get out there, and I've been on the receiving end of a poorly trained user at a doctor's office.
I went in and had to fill out a form that said, you know,
(22:41):
we're using Microsoft Copilot to do whatever with your historical notes. Do you comply?
And I like to test everything in the real world.
So I went up to the receptionist and asked: can you tell me what notes are actually going to be processed?
Or what the function is, or the purpose, or any of that?
And they looked at me like I was speaking Greek.
So the person who handed me the form to sign
(23:08):
couldn't even explain roughly what was going on with the data, and that's a real disconnect in terms of mission and end user.
So we look at the comprehensive rollout: training, process, documentation.
It's that stuff that just kind of drips off into nothing in a lot of organizations, I think, and it really affects the quality of the AI adoption. Someone in the investment space was talking to
(23:31):
me, and he said it's really concerning for the investment community, VCs, private equity, because what they're seeing with their early AI investments is that they've been able to secure MVP contracts, but the adoption rate for the year-two
(23:53):
contract, the big re-up, is failing at a lot higher clip than traditional SaaS metrics, because the users are completely clueless on how to use the tool.
They're kind of being left alone. So that user testing isn't just a bad outcome, it's affecting investment.
So there's a lot of pressure right there.
It's interesting, and really overlooked.
Speaker 1 (24:16):
Yeah, I need to go back to the receptionist that was asking you to fill out the liability form on the use of AI.
That's not a great way to establish trust with your customer, that's for sure.
So shame on them.
What you just said about the second-year use of their
(24:37):
services strikes me as, yes, a user training issue, which is an interesting one, like you said.
Is it because the users are told to just adopt this new technology, start using it on your own and go learn from it?
They're not investing the time into the human aspects of
(24:58):
leveraging these technologies, or is it something else?
Speaker 2 (25:02):
There are existing processes in place, so there are workarounds in some capacities.
Right, they can go back to older ways of doing things.
Maybe those are left in place as a fail-safe or redundancy, instead of committing to the new way.
I didn't get into a great use case with that, but it was just kind of a broad statement from someone who is in that
(25:25):
investor space, has had nice wins in Silicon Valley, works at a large consulting firm, and this is a concern they see in the market for investment.
Speaker 1 (25:36):
Well, I could definitely see how, if you give people options, they're going to choose the one with the least resistance.
So if you're going to go for it, you need to burn the ships. You need to take out the old ways and say: this is the new way, you're going to learn this, because we believe in it.
You know, it's like giving them two different Word document
(25:58):
editors, right?
Speaker 2 (26:01):
Right, that's exactly right.
Speaker 1 (26:02):
Stop using that old one.
We're going to keep it on your PC, but don't use it anymore.
We want you to use this new one with the AI technology in it, and start using that AI, because it's going to help you do your job better, be more creative and save you time.
But it may not be as intuitive at first, right?
And so they say, oh, I'm not sure about this.
(26:23):
And then they give it a shot, and then they're like, oh man, I just wasted an hour, I've got to get this out, you know.
And it's like, okay, that's fine.
But sometimes you do have to look at the opportunity ahead, invest a little bit more time and burn the ships, before deciding that it's easier to go back.
Speaker 2 (26:42):
Yep.
So invest in that user testing and invest in great training. I think those are commonly accepted best practices in change management anyway.
Why organizations skip or deprioritize that is always strange to me, but it's really hitting in a lot of different places.
So there's a lot of opportunity there.
Speaker 1 (27:05):
Yeah, so your team pioneered AI auditing research.
What were the key findings, and how have they shaped today's best practices in AI assurance?
Speaker 2 (27:16):
Yeah, so probably the best way to say it: like I said, we publish research, and we have our audit card, which is one of the ways we're helping businesses.
You know, I think it was a couple hundred agencies, maybe 200, producing research around the NIST AI Risk Management Framework, and that is probably
(27:50):
the most commonly accepted risk management framework here in the United States in terms of best practice, and we're happy to be a contributing research group in that.
So we've developed a great network of researchers and thought leaders to produce that.
Overall, though, we're also part of the International
(28:10):
Algorithmic Auditors Association, which is a global network for unified auditing standards, and our CEO sits on the board of that.
He's a board member of an organization called For Humanity, which is working towards AI for good, AI people-first, human flourishing, these types of outcomes, and we also participate in different think tanks with universities. And from
(28:36):
our research, we're also applying it daily with our assurance work.
So we're out at the leading edge of some of the use cases, agents, LLMs, more difficult, challenging things, rather than just a basic: what's the statistical bias within this decision over that decision?
We're out here trying to solve some of the more complex
(28:56):
challenges daily with organizations.
So we research all the time.
It's a core function of our business, and we've applied it in many different ways, many different avenues.
Speaker 1 (29:06):
That's very cool.
You all have your hooks into a lot of different areas.

Speaker 2 (29:11):
I work with a lot of smart people. It's pretty awesome.
Speaker 1 (29:13):
Indeed, it is.
So how do you measure AI success beyond just performance metrics, and what should businesses be considering when evaluating AI systems?
Speaker 2 (29:24):
Yeah, that's a really good question, and it's not a punt, but every organization has their own desired outcomes with AI. What Babel is here to do, we're not strategy, we're not change management, we're just aware of how important those things are, is help assure that those outcomes are there.
So when you're talking about metrics beyond maybe just profit, those
(29:46):
are things like human impact and value generally, and the term "ethical use of AI" is always something that kind of gets mystified.
So, because we're here in the state of Virginia, this is for AI Ready RVA, if you're still listening to this episode, this is all from the VITA policies on what is acceptable, ethical use of AI.
(30:13):
This is how the state of Virginia is actually measuring, and these are their guiding principles for AI use. Just to grab a couple of points: they want AI to be trusted, safe, secure, and implemented in a responsible, ethical and transparent manner.
So it must be validated by humans for bias and unintended
(30:39):
consequences, and all of their departments, agencies and offices are ensuring that things are explainable, accountable and resilient.
So these are other metrics that are kind of more idealistic states for a lot of organizations. Like, yes, we want to be positive for everybody.
(30:59):
What is that?
So that's where we go back to: your desired outcome kind of defines the North Star of the metrics, and are your metrics really producing that?
Performance metrics, I think, typically track back to close rates and ROI and these things, and
(31:19):
they're incredibly important for capitalism and business generally, which I support.
But there is kind of this other layer now.
And how do you put metrics to that? Those are things that we can help organizations figure out, because a lot of those are more philosophical.
And how do you take data and turn it into more philosophical
(31:42):
outcomes?
That's really challenging.
But that goes into our testing and evaluation, verification and validation process, and within that we can pull metrics.
So one organization, within the hiring process, was stuck in a
(32:04):
procurement challenge, and their customer really wanted to know if they were producing bias against candidates applying with English as a second language.
So we worked with them to come up with the right kind of cocktail of metrics to produce that report.
(32:24):
So that's how we help organizations on that side.
There are all sorts of metrics that go into that.
That's a wide conversation, that's a whole podcast, for sure.
Yeah.
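
One common starting point for a metric cocktail like that is the disparate impact ratio, the EEOC's four-fifths rule. Below is a minimal Python sketch applied to hypothetical screening outcomes for ESL versus native-speaker candidates; the data and the single-metric framing are illustrative assumptions, not the engagement's actual report.

def selection_rate(outcomes: list[bool]) -> float:
    # Fraction of candidates in a group who were advanced.
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group: list[bool], reference: list[bool]) -> float:
    # Ratio of the group's selection rate to the reference group's.
    return selection_rate(group) / selection_rate(reference)

# Hypothetical outcomes: True means the candidate advanced to interview.
esl_candidates = [True, False, False, True, False, False, False, True]
native_candidates = [True, True, False, True, True, False, True, True]

ratio = disparate_impact_ratio(esl_candidates, native_candidates)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.50 for this toy data
if ratio < 0.8:  # the four-fifths threshold
    print("flag: potential adverse impact against ESL candidates")

A ratio below 0.8 is evidence for closer review, not a verdict; a real audit would pair it with significance testing and qualitative review of the screening pipeline.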
Speaker 1 (32:38):
You've brought up philosophy a number of times in recent questions. Do any philosophies stand out to you? One that you said earlier was: what is the role of the human being in a world where AI exists?
(33:01):
I probably misquoted you there, but that kind of stands out to me as really profound, right?
Have you come across any other quotes, any interesting thoughts there?
I love philosophy, by the way; that's why I do this.
Speaker 2 (33:14):
I don't know if I have anything specific offhand.
I think I have my own personal philosophy for winning in business, and it's probably not specific to the Babel use case.
I'd love to maybe come back on and talk about it down the road, but I just think that working towards people-first
(33:35):
outcomes, and really being able to define what a people-first outcome is, is kind of this ethical AI, if you will, or ethical use case, however you want to call it.
The problems that exist in this world aren't: how many emails produce a close rate?
The problems in this world are: are we going to have
(33:58):
things like food for everybody, water for everybody, an environment that doesn't attack us and ruin our houses on the coast?
Right, we want to have a beach house that's nice and that's not going to get wrecked by climate change.
So I think those are new business outcomes that people should be striving for.
That's my own personal philosophy, and I do think that
(34:20):
AI does give us the opportunity to pursue those solutions.
And if you can aim AI towards outcomes that reduce those impacts, and I'd say start at the root cause, not the results of the problem, that's going to get people into better places across the globe.
(34:41):
So that's my personal philosophy. I don't know if anyone else has that.
Speaker 1 (34:44):
If you start there and keep that as your North Star, then I don't think you can go wrong.
Honestly, this philosophy talk makes me want to think about the future.
So tell us, Brian, what's next with AI oversight and governance?
Speaker 2 (35:05):
Yeah, I think there are a lot of really great governance tools that exist.
When you think about AI at scale, how do you manage it at scale, daily, when it's making a lot of decisions for people?
So there are a lot of new tools, new industries emerging, new software to be sold in there.
But I think that's just a little slice.
(35:26):
Overall, and we talk about AI literacy and workforce transition, more people need to understand how to govern and manage AI, and I think the people that really invest in those skills are the ones that will be leading the future of business, just like anybody would for, like, SOC 2
(35:47):
or an ISO standard.
These are well-respected certifications that the enterprise has validated as necessary to do business, and AI assurance, or an AI audit, is going to fall right in line with some of those same principles.
(36:09):
So I would say AI governance should be a commonly accepted workforce role, and AI auditing should be as common as a financial audit in terms of how we think of business functions.
Speaker 1 (36:24):
So yeah, a lot of stuff coming that way. One more question about this before we can move on, and that is: it feels to me like the conversation is around either creating this governance role in your organization or moving people toward the governance role.
But what about the people that are doing other jobs, who should
(36:45):
also be gearing up for understanding AI governance and the tools, and being able to have the conversations in the diverse focus groups around solving the problem?
What do you think they need to know, as tangential roles in this equation?
Speaker 2 (37:05):
Yeah, that's a good question.
How I like to think of AI and workforce transition is: there are a few people that are out in front, and it is a few, even with the echo chambers of people deep into AI.
Some organizations have zero clue what's going on; they're just aware that it exists.
So everybody, in my opinion, is starting in relatively the same
(37:29):
place, kind of at an entry-level knowledge.
I would say every single person in the workforce today understands what it is they do really, really well right now, and the next step is just understanding how AI functions.
So this is part of the education that we bring: the principles of AI, and how we define the socio-technical algorithm.
(37:49):
So there's the human, there's the machine, it's a workflow, and it is now all math and done by robots and agents, like immediately, which is crazy. But effectively it's just workflow design.
It's thinking about business processes and automating them quickly, and just understanding how it works.
(38:10):
Once you understand how it works, you can start to think about it more strategically: how do I maneuver myself, how do I position myself to either be a key human in the loop, or be above and managing that system and that workflow or the product? And everybody has that unique, diverse perspective, which makes
(38:32):
them valuable, if you understand how AI works.
So that's how I like to think about it.
And then, whether you are staying kind of ground floor or you want to move and ascend into more leadership roles with AI, I think it's all there. And then ultimately, like we said earlier in the pod, AI auditors are probably the last job on the planet, to a degree. That's a future state,
(38:56):
with a lot of time to get there.
Speaker 1 (39:01):
Let's round it out for our listeners.
We've got business leaders listening to this episode thinking about what they should do.
What is the first step you would recommend they take to ensure they're using AI responsibly?
Speaker 2 (39:16):
Yeah.
So if you're just hearing about AI governance right now, it's a new concept, I would just take time to read some basic information.
There's so much out online, and there are some really good people out on LinkedIn.
One person I really recommend has a great book.
(39:36):
It's called Ethical Machines, by Reid Blackman, who our company is familiar with.
He's an ethics consultant.
But just the first chapter of that book, man, it's punchy, it's good, it's sharp.
It explains how governance structure and governance content come together, and it really draws out some really basic, like,
(39:59):
aha, I understand where that can go wrong.
So it kind of helps frame your thought.
So if you're new to it, I highly suggest that book.
I made that first chapter part of my sales team's training curriculum, so shout out to Reid, and our CEO definitely helped out with some of the ideas around that book and fact-checked it.
I learned that after I read it.
(40:20):
Oh, nice, yeah, so that was cool to know.
But yeah, Reid's a good dude. That's how you kind of start; that's what I would suggest.
If you want to go a little bit more in depth: Babel AI.
We're a leader in this space in terms of audit and assurance.
We have our own coursework, so you could even look
(40:41):
at how we're inspecting these systems and how we think about governance from an assurance and audit function.
If that's more attractive to think about, we have a whole AI auditing certificate program, and governance is just one of the five courses in it.
So we have all sorts of room to grow.
(41:01):
But within that are the principles and a simple checklist for how to build your own governance system.
But yeah, there's a lot of good information out there.
Those are kind of my two biased opinions, but there's a lot of great information, a lot of good people working on AI governance right now.
Speaker 1 (41:18):
Cool, I love to listen to a good Audible book, so awesome. All right.
And lastly, Brian, if there's one thing that you hope our listeners take away from this conversation, what would it be?
Speaker 2 (41:31):
One thing? Maybe this is a different angle to think about all of this, so maybe this is a new perspective.
AI governance is important, but what does it really do for a business?
If you have your governance, you have your policies, and you're constantly testing and making sure, you're better able to maintain your AI trajectory and your
(41:52):
systems, and it's much easier to maintain than to fix a big problem.
Think about a car: changing your tires versus a blowout, right? Going to the doctor for your routine checkups versus a big diagnosis that just didn't get addressed. Those are kind of the dichotomies that I look at AI governance with.
(42:14):
So if you have AI governance in place, you're maintaining, you're tracking, you're managing, versus: I'm just kind of out here freewheeling and hoping that nothing bad happens, but when it does, it's going to be bad.
And it is an investment.
And I say invest in the maintenance rather than invest in the big fix, because the big fix is more expensive and
(42:36):
has so many other risks associated with it. I think it's a necessary thing for organizations, to reduce their overall liabilities and potential brand impacts with AI. And trust me, there are lots of use cases you can go out and look at where people get it wrong, and there are class action
(42:57):
lawsuits that are already moving. There's a lot.
So invest in it, take it seriously, and put yourself in a position to innovate and win long term.
I think that's what AI governance does for organizations.
Speaker 1 (43:12):
Well, I for one have been convinced this is the real deal, and I'm going to take it seriously.
So I would love to explore more of that content by Babel in the future, personally. All right, last question for you, Brian.
If you could wave a magic wand and have AI do anything, what would it be?
Speaker 2 (43:30):
Man, that's a good question.
You know, I'm a futurist, an eternal optimist, but a realist.
I would love for people to deploy AI in a way that didn't just prioritize profitable outcomes, and instead prioritized kind
(43:50):
of people-first outcomes that produce profit, and with that I think we can solve the world's problems, right?
So I always tell people, you know, hey, what are you up to? I'm trying to change the world.
So I hope everybody buys in and uses AI to create valuable outcomes that prioritize people.
(44:11):
I think that's what I would love AI to be able to do.
I think that does create purpose for people.
We have a lot of problems to solve and there's a lot of work to get done, but I think AI is the magic wand to achieve it, unlocking a little bit of creativity in there too.
So I'd love AI to solve the world's problems.
How's that?
Speaker 1 (44:34):
Yes, I'm with you there.
I think it does bring purpose, to focus your life's goals around solving big problems, for humanity's sake, for the local community's sake, whatever it is. However you reach it, it's a beautiful thing.
I think most people want to make a difference.
Speaker 2 (44:57):
They really do.
I think AI is an incredibly powerful tool and, aimed in the right direction, it can really do some cool stuff.
That's what we try to do at Babel: help people get their aim and their trajectory, and make sure that they're getting what they want out of their AI and that it's applied for the
(45:18):
right reasons. I think those business strategies win long term, and it's going to be good stuff. Rocky road, but we'll get there.
Speaker 1 (45:26):
Yep, you've convinced me.
The future is responsible AI. That's awesome.
I can sleep better at night now, Brian, thank you.
Speaker 2 (45:35):
Man, I wish I could say that for myself, but I'm happy to see it.
I'm helping change the world: I got Jason to sleep a little bit better at night.
I'm still up on ChatGPT all night talking about it.
All right, all good stuff.
All right, man.
Speaker 1 (45:51):
Thank you for your time today, and I look forward to talking to you again in the future.

Speaker 2 (45:55):
Okay, yeah, likewise, Jason.
This was great. Cheers.
Speaker 1 (46:03):
And thanks to our listeners for tuning in today.
If you or your company would like to be featured in the Inspire AI Richmond episode, please drop us a message.
Don't forget to like, share or follow our content, and stay up to date on the latest events for AI Ready RVA.
Thank you again, and see you next time.