
August 12, 2025 · 47 mins

Allegra Guinan of Lumiera helps leaders turn uncertainty about AI into confident, strategic leadership. In this conversation, she brings some actionable insights for navigating the hype and complexity of AI. The discussion covers challenges with implementing responsible AI practices, the growing importance of user experience and product thinking, and how leaders can focus on real-world business problems over abstract experimentation.


Sponsors:

  • Shopify – The commerce platform trusted by millions. From idea to checkout, Shopify gives you everything you need to launch and scale your business—no matter your level of experience. Build beautiful storefronts, market with built-in AI tools, and tap into the platform powering 10% of all U.S. eCommerce. Start your one-dollar trial at shopify.com/practicalai



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Jerod (00:04):
Welcome to the Practical AI podcast, where we break down the real-world applications of artificial intelligence and how it's shaping the way we live, work, and create. Our goal is to help make AI technology practical, productive, and accessible to everyone. Whether you're a developer, business leader, or just curious about the tech behind the buzz, you're

(00:24):
in the right place. Be sure to connect with us on LinkedIn, X, or Bluesky to stay up to date with episode drops, behind-the-scenes content, and AI insights. You can learn more at practicalai.fm.
Now, onto the show.

Daniel (00:48):
Welcome to another episode of the Practical AI podcast. This is Daniel Whitenack. I am CEO at Prediction Guard, and I'm joined as always by my cohost, Chris Benson, who is a principal AI research engineer at Lockheed Martin. How are you doing, Chris?

Chris (01:05):
I'm doing very well today, Daniel. How's it going?

Daniel (01:08):
Yeah. No complaints. It's been an interesting week of AI things with, I guess, OpenAI being open again. I'm sure we'll talk about that on this show at some point. But maybe there's a connection there to responsible

(01:28):
practices.
But I'm really excited today because I've seen a lot of things popping up on our friend's channel, Demetrios over at the MLOps Community, from Allegra Guinan, who is cofounder and CTO at Lumiera. Welcome, Allegra. Great to have you here.

Allegra (01:49):
Thank you so much. Great to be here.

Daniel (01:51):
Yeah. Well, I kind of alluded to responsible AI things, which is certainly something I'm sure we'll get into. But it may be useful just to hear a little bit of your background and kind of how you got into what you're doing now, which I understand is advising business leaders around AI

(02:13):
principles and responsible AI practices and strategies and that sort of thing. Correct me if I'm wrong, but yeah, would love to hear a little bit of that background and kind of how you arrived at doing what you're doing right now.

Allegra (02:26):
Yeah, for sure. I have sort of an unconventional background and path towards CTO and co-founder. It's definitely not something I ever thought I would land at, but happy I did. I got into the tech scene over ten years ago now. I was living in San Francisco.
I'm actually based in Lisbon now, similar cities in some

(02:46):
ways, but I was in the heart of the tech scene by default. I had actually studied in the arts and it was a different path than I expected. I started working at a startup that was doing 3D visualization that was quite ahead of its time. I was working on the data platform team there, so I worked a lot with backend

(03:07):
engineers and figuring out what our data architecture would look like essentially for this 3D visualization e-commerce that was working in the interior design space. That was obviously very new to me as my first tech job, and I had started in more of an ops role and then moved quickly into product management.
I was building out internal tools, which is why I was

(03:29):
working so closely with those teams and focusing then a lot on search optimization, figuring out which words would yield which results: couch, sofa, you get the same sort of thing in the end. Seems very basic now especially, but back then it was a lot of work. I got really excited working with data teams and it kicked off this interest and passion in a way for

(03:51):
technical projects. Then I moved over into the FinTech space. I was working at Chime, which maybe you've seen in the news recently.
I was also working on the data team there and really saw it at scale. So I was a part of the technical program management team. So I was managing technical portfolios with a lot of different programs, usually longer scale and multi-year

(04:14):
data programs, and figuring out how to get real-time data to those who needed it within the organization. That often meant ML engineers. That was my first touch point in the ML space, working a lot on fraud and security within the finance space, which was also extremely interesting.
I got further into this data engineering world and I built up

(04:38):
this dev advocacy within me, and I really loved working with those teams. And so then I shifted again to Cloudflare, so I was working as a technical program manager there. It was a multinational, or is a multinational organization, of course, so a bit bigger scale again than I was working on previously. I got to then get deeper into a lot of AI

(05:00):
initiatives across the enterprise. That was my path.
I met my co-founder at Lumiera, Emma, who's the CEO, around four years ago on a beach in Lisbon. And we didn't realize we had this similar interest in the AI space and in technology. And then many years after we already became friends, we realized

(05:21):
that. She had already had an organization around consultancy and business strategy. We came together and formed what is now Lumiera about a year and a half ago.
Really, it has grown over time, especially as the space is shifting so much with technology and AI. But as you mentioned, we are really trying to guide leaders in making more

(05:43):
responsible decisions around this new era of technology, which is how we got the name: Lumi for light, era for future, so a brighter future. It's something that I'm really excited to be a part of and to be leading in. And I'm glad that my voice is sort of starting to come across the ecosystem in various places.

Daniel (06:04):
That's great. When you were talking, I was thinking back to a conversation Chris and I had, I don't know if it was a couple episodes ago, but we were talking about the services industry kind of more broadly. And it does seem like there's such a need for good, you know, responsible strategy and insight

(06:26):
from the services standpoint around AI. But also there's kind of, like, on the other end of that, AI eating up some of the services industry, certainly around whether it's marketing agencies or maybe prototyping new projects and that sort of thing. So what is it like, I guess, being part of

(06:48):
that services industry in this age of AI?
And how do you see kind of the role of advisors, service providers either shifting or, I don't know. What are your thoughts around that in terms of the best value that even services companies can provide during this time where maybe

(07:12):
certain areas of what they provided before are getting gobbled up by some of the things that AI is providing?

Allegra (07:20):
Yeah, there's a lot in there. So I'll start with sort of the challenge of being in this space, and that is that we're not offering a silver bullet. We're not offering a tech product that will solve all of your problems. We are not just handing this over to you as an answer. We're really developing leadership.
So our core product is an executive education program that

(07:43):
spans eight weeks. And we are covering all of the challenges that we're seeing right now in AI that are not necessarily technical. It's a very human-centered approach to technology. Of course, it's easier and people want something that's fast and they just want the answers. What should I build?
What should I buy? How do I do this? And then can I pay you to

(08:04):
do it for me? We're really offering a counter-narrative to that and instead asking you as a leader: you have already built yourself up, you're leading this organization, however big it may be. What can you do now to make the right choices instead of offloading that onto somebody else?
It's really testing folks and it's putting them up against a

(08:25):
wall sometimes to be a better leader. That's a personal choice. Not everybody wants to invest time in being the best version of themselves, or maybe they're tapped out and they don't want to do this anymore. That could also be the case. But we're working with the ones that are trying to position themselves as real leaders in this age that we're in and to

(08:48):
continue to lead and to bring their entire organization around.
Because I'm sure you're seeing there are a lot of failed projects, a lot of missed returns, expectations that were not met with AI in the past couple of years. Most of that is a human issue or a leadership issue. It's not a technical issue. The tech is there. We are missing this communication and

(09:10):
translation layer. And that falls on leadership in our minds, which is what we're trying to address.

Chris (09:15):
It's really refreshing to hear that you're trying to
change the narrative. Because I would imagine, I know that Daniel and I are constantly bombarded, you know, with different companies that are out there telling us that, you know, they have the solution, it's AI, and it will solve everything having to do with that. And that's so common. And so kind of

(09:37):
hearing it being grounded in leadership, I guess, kind of going toward a question here, I would imagine it is very hard for leaders or aspiring leaders to kind of process the never-ending rapid change that's occurring in this space.
Because, you know, over time, having seen decades of

(10:01):
business and stuff, this is a much faster cadence than it has ever been.
We've seen change over those times, but now, you know, literally every week, there's a collection of new things to consider that are hitting you up, you know, wherever you're at. So how do you deal with that? I would imagine that your leaders are coming in with

(10:22):
some level of anxiety, and some level of uncertainty, you know, and maybe even some fear of making decisions that are going to bite them on the rear end, you know, not very far down the road. Now, how do you get through things like that? How do you get through that kind of fear and anxiety?

Allegra (10:41):
Yeah, I mean, you called out some of the main challenges that we're seeing. And through many conversations with many leaders across the board, we're hearing the same thing. So one is this noise exhaustion and information overload of trying to keep up. Another is fear of missing out and getting left behind. So you either end up in this position where you're sort of paralyzed and you're not sure what move to

(11:02):
take.
And so you're getting left behind in a sense, or you fear that you are, or you're moving really quickly, you're making a lot of decisions, but they're not necessarily the right ones. They're not grounded in anything. It's just based off of this reaction to what you're seeing around you. That could be a very narrow echo chamber of information that you're being exposed to. Maybe you only check Twitter for your updates, or you

(11:24):
only check one sort of newsletter to get your information and you're not creating this landscape of multiple sources of information to decide which move to make.
So yes, there is this stress, anxiety, and exhaustion that we're seeing, which is also what we're trying to address. We do that by not focusing on every latest model, what the latest

(11:45):
hype is. We're focusing on what your challenges are as a leader in your organization. And that is not gonna change every single week; in theory, maybe it is for some folks. But if you're a mature organization and you have strategic goals, probably you know what they are or you should know.
And then you can start to address what the technology is that would help you achieve what your goals are related to those

(12:07):
challenges. And so it doesn't matter if there are 10 new models that came out last week. You don't need to know what all of them are right now as a senior leader or as an executive. You need to understand what numbers you're trying to shift, what kind of transformation you're trying to move forward within your organization, within your

(12:27):
workforce, and then you can find iteratively the best solution technically for that. We're trying to help people build that mindset and that scaffolding to understand the ecosystem.
We do have a section in our program. We have it split up into three foundations. The first is on confidence, which I can go back to in a second, but the second is around action.

(12:50):
That's understanding risk and it's understanding the industry and industry radar. It's about setting your vision for AI, not for your organization, but as yourself: what is your personal stance as a leader?
What do you care about? Is it security, privacy, transparency? What are those principles that really resonate with you that you can then use to make your decisions? Once you have an

(13:11):
understanding of how to evaluate risk, once you understand what's out there in a general sense of capabilities, not every single minute detail, but a general understanding, then you can start to think about what opportunities you have in front of you as far as use cases. We really do focus on that rather than trying to put a lot more information in terms of

(13:32):
technicalities in front of you.
Then just going back to the confidence portion, so we have that as our first foundation because we want people to develop this mindset as leaders of, Okay, I already have a lot of strengths to move forward with. I've already built myself up and my organization up. Understanding and knowing every new model drop is not going to be the differentiator here. It's

(13:54):
how I communicate with my workforce, how I keep people engaged, how I can manage everything that's transforming in front of us and keep people excited to be here and to be a part of what we're building, whatever that is. You need to have that resilience as a leader and that confidence in yourself and to be informed before you can start taking action, before

(14:15):
it even makes sense to start reading all of the latest news, because it won't mean anything to you unless you have that personal understanding.

Sponsors (14:30):
Well, friends, when you're building and shipping AI products at scale, there's one constant: complexity. Yes. You're wrangling models, data pipelines, deployment infrastructure, and then someone says, let's turn this into a business. Cue the chaos. That's where Shopify steps in, whether you're spinning up a storefront for your AI-powered app or

(14:51):
launching a brand around the tools you built.
Shopify is the commerce platform trusted by millions of businesses and 10% of all US ecommerce, from names like Mattel, Gymshark to founders just like you. With literally hundreds of ready-to-use templates, powerful built-in marketing tools, and AI that writes product descriptions for

(15:11):
you, headlines, even polishes your product photography. Shopify doesn't just get you selling, it makes you look good doing it. And we love it. We use it here at Changelog.
Check us out at merch.changelog.com. That's our storefront, and it handles the heavy lifting too. Payments, inventory, returns, shipping, even global logistics. It's like

(15:33):
having an ops team built into your stack to help you sell. So if you're ready to sell, you are ready for Shopify.
Sign up now for your $1 per month trial and start selling today at shopify.com/practicalai. Again, that is shopify.com/practicalai.

Daniel (15:57):
Yeah. Allegra, it's really encouraging to hear your
perspective. I can second Chris there. Of course, when we're working with customers, when we're talking to people on the podcast, when we interact in our companies, all the time hearing like, Oh, this new model came out and now OpenAI has open

(16:19):
models and I need to, should I switch? And yeah, I think having this sort of internal peace that even if no one ever released another model, you have more than enough to be very transformative in your organization.
Don't worry about it. There's a long way to go there. Yeah. So I think that's really interesting. I love also the

(16:42):
perspective on leadership because one of the other things that I think we're seeing a little bit, and I would love to get your perspective on this, is kind of the executives in a company kind of dictating, like, we are now going to transform with AI, right?
And everyone in the company really not understanding what

(17:06):
that means practically. Or leaders like, Oh, I'm a manager in an engineering team and I want all of my developers to be more efficient. So I dictate to them, All of you need to be using these AI tools. And really, no one ends up using

(17:27):
them. Everybody kind of has the workflows that they're used to. And so there's really not that kind of trickle-down transformation that happens. Wondering about your perspective on that. Maybe even for me as a leader of a team in my company,

(17:47):
I really want to understand that element better because I want to both lead by example, but also understand, to your point, how to lead my team forward well in a way that is embracing the right AI technology and being transformed. But I know that I also can't just walk in one day and be like, Everybody use more

(18:09):
AI, and then I go sit at my desk. Even whether I'm using more AI or not, it really doesn't matter.

Allegra (18:16):
Yeah. I mean, this is one of the critical mistakes
that we're seeing now. There have been some recent news stories coming out of leaders having to roll back their AI-first organizational approach because they weren't expecting the backlash that they got. They assumed everybody was thinking about AI the same way as they were, which is not the case.

(18:37):
Everybody's coming to this from a different level of literacy, from a different perspective.
Everybody has a past relationship with how they view this technology as it relates to complexity, if it's actually more useful or not. You can give people any tool you want. You can give them a stipend, like free money, go try whatever you want. But unless you help them understand why or what it would

(19:00):
help them solve and really bring up that level of AI literacy across your organization, it won't make a difference because people won't understand what you're trying to do here. Unless you also communicate that clearly as a leader, you're going to come across as very prescriptive.
I think, especially in the engineering space, we all know that that's not ideal. We don't like when people are super

(19:20):
prescriptive and just tell us what to do. We like to explore and to do research and to get there on our own. What's interesting about AI is that this is really coming from the bottom up in a lot of ways. Three times more employees within organizations are using AI than their leaders think. That was from a recent report this year. So it's not that

(19:42):
people are not ready or can't be engaged, but it's meeting them where they are and having a conversation that's very honest. So what are you using AI for? It shouldn't be stigmatized. If you want to encourage usage, then help people understand that it's okay to share where they're using AI and why they chose to do it that way.
Have open sessions where you're sharing with one another.

(20:04):
Establish this AI champion culture and a fail-forward culture as well. You have to invest time in experimentation and research and know that it's not all going to be perfect and to make people feel like that's okay and that they can try these things and share openly. Because if you don't do that, then it doesn't work. People won't use it if they're not part of it and

(20:25):
if they're not involved.
They already are using it. That's the thing. Most people are using something to the side of their work, whether or not you put it in front of them purposefully or not. Using ChatGPT or Claude, or they're coding with something on the side, they have an AI-driven IDE. Something is happening in the organization, whether or not you built a program or

(20:48):
initiative around it.
So it's better to do that in a very open way and an honest way where everybody is involved. One of my favorite things that I worked on at Cloudflare was piloting different assistants, AI coding assistants. It was a large group of engineers from various teams and a lot of it was qualitative. This was my approach coming in, and I don't know if they liked it. I think

(21:11):
they did because the results were good in the end, but it's a lot of just understanding what people like and having a lot of channels of communication for, Did you try this thing out?
How did it go? We're going to give you a fully supported space and time to invest in trying this. Then we're going to do it with something else. We're going to compare them very honestly

(21:32):
because we're not just going to choose a solution that seems the best on the market right now. We're going to choose the best solution for you, for this specific group of engineers that are part of this organization.
I think being quite humble as a leader in that sense too, that you don't know the best thing for everybody, you're at the top, you don't have your hands in every single initiative, you shouldn't anyways, that's my point of view. You have to trust

(21:53):
that the people that you hired have an opinion that's worth hearing and then give them space to share that.

Chris (21:59):
You mentioned something just now that I was really
wanting to dive into. You mentioned the word trust coming in there, and that's complicated. And that there's trust in multiple directions. There's not only the trust that the leader or leaders must have in the teams that they are overseeing, but there's also the trust of those being

(22:21):
overseen, you know, that are doing the work, the engineers, and trusting in the motives of their leaders. And that raises some interesting things, several of which you've mentioned.
You know, as you pointed out, there's this reality that employees are using AI in areas that maybe they

(22:42):
have even been told explicitly not to, or at least they're finding a place to kind of bring it in whether it's noticed or not. And you also have these top-down, you know, thou shalt go forward and use AI, with employees worried about, you know, what does this mean for my job, you know, job security, is this AI eventually going to replace me? There's

(23:05):
so much involved in this, and probably more so than I've observed in the past when we've had, you know, before AI, before the AI wave, we had the cloud computing wave. And before that, we've had other waves. And there was a trust in the technology and privacy and stuff.
But there's now an implicit trust within your own organization that exists, you know, those factors. How do you
(23:27):
organization that exists, youknow, those factors. How do you
address that, you know, with leaders as you're getting into it, if they don't recognize that upfront? How do you get them to recognize and take action on that kind of new dynamic that's now in the workplace?

Allegra (23:44):
Yeah, trust is so critical here at every level, at
the technical level, at the human level across the board. I think the way people notice this, unfortunately, is when things don't pan out or there's some sort of internal rebellion as we're seeing with this backlash that I mentioned, or they're not seeing, again, the returns that they expected because people didn't adopt in the way that they anticipated.

(24:06):
It's because there wasn't that relationship building and that trust, because the people that they're trying to involve, the workforce, were not a part of those decisions. If you have a vision as a leader and you want to be AI-first across everything, but you're not communicating that, you didn't set any standards, you didn't publish any policies that help people understand what's okay and what's not okay, then that

(24:29):
doesn't elicit trust in the environment. Again, it just comes back to something that feels very top-down without involvement, which won't lead to any results.
Then there's trust in what you're actually building. This is something that I also try to advocate a lot for because right now engineers are the ones really pushing this forward in a

(24:50):
lot of organizations. There are groups that are just trying things out. Again, whether or not it was dictated that it should be done, that's just sort of what's happening. And so what's being built might not necessarily have trust built in by default because that wasn't something that was thought of at first.
Maybe you're just trying to build something cool, you got access to something, a new model came out and you just want to

(25:11):
throw something together. That can sometimes escalate to being now used by the company or some leader wants to see that in production, even though it wasn't really tested thoroughly. How can you expect somebody internally, if you're building for, let's say, another internal user, to trust what you've built if you don't communicate why it was done or how, and you can't

(25:31):
really explain where the outputs are coming from and there's no documentation around it? We've abandoned the product approach and thinking and anything around documentation or thorough testing when it comes to AI. It's just throwing things out there and some things go into production and are used and sometimes they work, but a lot of times they don't. And so it's

(25:53):
hard to build trust when you're moving that way without a lot of intention and without a lot of clarity that you can express to other people.
Something that we see failing a lot: even when something works really well technically and it's perfectly executed, if you can't explain that to maybe somebody in risk or compliance, it's not going to get very far and they won't be able to roll that out

(26:16):
and it won't be trusted, even if you, the individual that built it, feel like it's good. So again, it's about this transparency and open communication as you're going, and why you can't really abandon documentation and you can't abandon the reasons that you built things or having observability or logging. It's not enough to just make

(26:37):
something that seems really cool. You have to actually back it up and be able to explain it to the others around you.

Daniel (26:43):
Yeah, some of what you said there is definitely
applicable internally and externally, but certainly a lot internally in terms of the documentation, how reliable something is, the testing, all of that sort of thing. Part of what I was thinking in my mind while you were talking is internally here, we like to talk about certain ways in which we

(27:05):
would like to build things. And one of those things that we talk about is we would like to build things that kind of restore trust in human institutions rather than further erode that via AI and automation. And I'm wondering from an external standpoint, I mean, one side of this is internal, how you kind of

(27:27):
integrate AI features, test them, deploy them, etcetera. It kind of gets to another level when, let's say, you're releasing your voice assistant publicly to the world, or you're rolling this out to your external customers and you say,

(27:48):
Hey, this is our new AI feature.
And that could go a lot of different ways, some of which, like I say, could erode trust further with your customers. Hopefully it's not already low, but maybe it could erode some of that trust that you've built up over time, but maybe it doesn't have to be that way. What have you found to be some of those

(28:09):
kind of key principles that leaders could keep in mind, especially as they're releasing things to their users or their customers or to the public, that can help the public understand or their customers understand that this has trust built in, I think is how you phrased it.

Allegra (28:29):
Yeah. What's interesting is that the gap and
disparity between what's going on internally with AI and what's going on externally is so wide. The experiences are so different from what people are building for their own teams and then what they end up putting in production. Maybe we can come back to that, but it's just something I see very obviously in the space. But I think one thing that's super important

(28:50):
here is the user research.
Right now, everybody is putting AI into their products everywhere. If you have a bunch of vendors that you work with as an enterprise, you'll see now that all of them are offering AI and they're all offering the same AI features. Maybe you asked for them and maybe you didn't. And so I think understanding still your user base, they might not need something to change, or the thing that they do want to change, you
(29:10):
something to change or the thingthat they do want to change, you
might not need to use AI for it in the way that you think. So really asking and understanding your users before you start deploying that kind of experience.
Because then if they didn't ask for it and they didn't actually need it, then why would they trust it and why would they start to be happy that it's out there? Unless it makes the

(29:33):
experience so much better, but a lot of times it doesn't because this is still quite nascent for a lot of organizations. So that's one thing. And then the other again is the transparency.
As a user, for example, you can go into financial services.
I think that's a really important industry and we think about the financial services a lot in terms of risk. But let's

(29:54):
say that I am now a user of some sort of FinTech app or something around my finances and you've put AI in there and I see something in front of me that I don't understand, and I ask, How did you get to that decision? Or even if I'm very technical, What model did you use to get here? And you don't have an answer for me, that will erode trust. That's something you need to

(30:16):
think about, especially as people are becoming more literate, but also sometimes at a shallow level.
They can ask a question. They might not fully understand what they're asking, but they might ask you something. And you need to be able to respond to that. Again, the documentation: do you have system cards in place? Do you understand what your guardrails are?
Are they documented somewhere? Do you have tracking? Do you

(30:40):
have system prompt versioning? Can you actually back up what you've done so that when somebody does ask you a question and they're looking for that confidence in you, they're looking for you to bring back the trust, and it's an opportunity for you. If you don't have an answer in that moment, you will erode trust with your external users.
So I would say for leaders thinking about that, to be able

(31:04):
to ask those questions first and make sure that they have everything in order before they start deploying. And then when you do have something that is driven by AI, being explicit about it. If people are uncomfortable, maybe it's because they don't know enough and it's your job, your responsibility as a leader in that space to maybe educate them within your product and the experience around what you're

(31:24):
offering. So if they're put off by it, understanding why, understanding your users, that has not changed. But again, somehow it's gotten lost, where suddenly we don't really care what users ask for or what their feedback is.
I think we really need to go back to that.
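For readers who want to make the tracking and versioning ideas above concrete, here is a minimal sketch; it is an editorial illustration rather than something discussed on the show. It logs each AI response alongside a versioned system prompt so a team can later answer which prompt and model produced a given output. The prompt text, model name, and log path are placeholder assumptions.

```python
import hashlib
import json
import time
from pathlib import Path

# Hypothetical, versioned system prompt. In practice this would live in
# version control next to the application code.
SYSTEM_PROMPT = "You are a cautious assistant for a personal finance app."
PROMPT_VERSION = hashlib.sha256(SYSTEM_PROMPT.encode()).hexdigest()[:12]

LOG_PATH = Path("ai_response_log.jsonl")  # append-only audit log (illustrative path)

def log_interaction(user_input: str, output: str, model: str) -> None:
    """Record enough context to explain an output after the fact."""
    record = {
        "timestamp": time.time(),
        "model": model,                    # which model produced the output
        "prompt_version": PROMPT_VERSION,  # which system prompt was in effect
        "user_input": user_input,
        "output": output,
    }
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(record) + "\n")

# Usage: after calling whatever model endpoint you actually use, log the exchange.
log_interaction(
    user_input="Why was my transfer flagged?",
    output="The transfer exceeded your usual daily amount.",
    model="example-model-v1",  # placeholder model identifier
)
```

Even a sketch this small gives a concrete answer when a customer or compliance team asks "how did you get to that output?", which is the kind of question Allegra describes.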

Chris (31:39):
So I'm wondering, as we've been talking a lot about
adoption and the trust issues that go with that, one of the things that I've been thinking about and curious on how you're approaching it is how you position the notion of responsible AI to different organizations that you're

(32:01):
working with, as you're going through this educational process. And it's like, because there's not a single golden path, you know, down that road. There are a number of organizations that have weighed in with different types of policy guidance and such on that. Everything from multiple government

(32:21):
organizations to non-government organizations and nonprofits. And so as a company is looking to try to make their AI strategy work, and they're starting to address these various things we've been talking about, how do you guide them into framing that whole effort?

(32:42):
You know, because it's a little bit different. There's not just a go do this and they're done. How do you approach that?

Allegra (32:49):
Yeah. I mean, first, there's no standardized
definition of responsible AI. There is a set of principles that a lot of people agree on and they overlap when you see some of these policies that have been put out, but there isn't something single across the industry that everybody has aligned on. That's one thing. The second is that a lot of leaders don't care about being responsible.
That's just the honest truth. They care about their bottom

(33:09):
line and they care about financials and they want to see numbers move and that's it. And so a lot of the framing does come back to that. And luckily, being more transparent, being more accountable, having robust systems, all of these lead to better results and more money at the end and better trust with your customers. So luckily it lines up that way, but it is a

(33:32):
different story to tell.
I think leaders need to start understanding that having these answers to your customers when they come asking, or being compliant and not facing major fees when you are not compliant, especially here in the EU, for example, and being able to lay it out in a very financial way that makes sense, is where to go in here. And then again, we're of course trying to shift the

(33:54):
mindset of leaders to understand what their own principles are and what they actually care about. And a lot of times, organizations already have these. They might already be security-first. They might already care about privacy.
So you can use that lens. So if you already care about privacy, are you thinking about access management and secured access when you're building your AI systems? A lot of people are not

(34:16):
right now, that's a gap that we're seeing where they have all of these governance structures in place, but then they built an AI system that completely erodes all of that and finds its way around, and they didn't think about that before. And so you can use their own values and their own framing of how they're running their business and tie it back to how to be more

(34:37):
responsible in practice. And we are seeing that this is changing a bit in terms of how companies are presenting themselves.
Going back to financial services, for example, I was looking through the top banks that are leading in the AI space right now. What we've seen mostly in the last couple of years is a shift in what they're presenting externally in terms

(35:00):
of explainability and their responsible practices and their leadership. They have a lot more people talking externally about how they're handling AI, sort of getting more insights into how they're approaching it. And so I do think that the tides are shifting because people are realizing that you do need to at least come off as if you care about it a bit, and maybe along

(35:21):
the way you will actually start to care and make some differences. So that's how I think about that.
But for myself, I also approach this from the engineering side. So because of my background and because I've built my entire career alongside engineers, a lot of what you want to do as a good engineer to build good products aligns with these

(35:42):
responsible practices as well. When you have testing in place, when you do have security in place and observability and you understand what you've built and you do, again, have these options for versioning and you have your MLOps figured out, you will have a better outcome when you're building an AI system.
And all of those things also benefit on the responsible side that then you can have other teams looking into to understand
(36:03):
that then you can have otherteams looking into to understand
how you got to that point. And again, you're bringing in the multidisciplinary trust that is so necessary for this.

Daniel (36:14):
As you're discussing things with leaders around
responsible practices, you know, how they should lead out with AI strategy, that sort of thing. One of the questions that's come up in my mind as you've been talking is the appropriate level of literacy around these subjects on the

(36:35):
technical side that a leader does want to have? Because I get into so many discussions, because we're kind of intersecting, my company intersecting with the world of security, talking to CSOs or CIOs or whatever. And they'll say things like, Yeah, we have our own model. It's running

(36:56):
internally.
None of our data leaks. And you sort of probe into that a little bit and you're like, No, actually what you just have is an API key to a model endpoint that's not running in your infrastructure and all of your data is living at rest in someone else's infrastructure. There's just like such a wide gap between what they apparently think they have and what they

(37:20):
actually have. And I understand us as AI people have probably not helped that because we've sort of obfuscated some of that terminology and made things maybe seem like they kind of are what they aren't. And so I kind of feel sympathy for a lot of

(37:40):
people that we have made it extra hard for people to gain this literacy maybe.
But, around things like model hosting, open, closed, fine-tuning, like all of these things are very confusing for people. What is your recommendation as you're going through this material with leaders around the appropriate... If there's leaders

(38:01):
listening in our audience, where should they be expected to get to, technical literacy-wise, to be an effective leader around AI things?

Allegra (38:12):
Yeah. This is definitely a challenge even for
me because I'm in this every single day and I listen to these terms all day long and I enjoy hearing about them. Yes, I'm listening to these kinds of podcasts. So to distill down what is actually critical for people to understand is even hard for me because I'm like, Oh, I want you to know

(38:34):
everything that I know.
That's obviously not possible. I think an easy way to approach this is what you just called out, as well as what's happening within the walls of your organization. That's a really good place to start. If you do have a system that is... You have an API call to some model, open or closed or whatever, who built that? Do you understand what's going on in your own infrastructure?

(38:56):
Who is designing this? Do you have an architect? Do you have a technical leader? Do you have a lead engineer? Do you have a CTO? Somebody should be responsible for understanding what has been built and why. If you don't understand that and you don't have a person and you don't have a relationship with that person, that's your first problem. So I think starting there and knowing, Okay, what do we actually have going on here

(39:16):
that's in production live right now? Let's walk through what those terms are so I can understand now how to navigate this space right now in front of me, rather than every single potential capability out there that other organizations are using, because that might not be super helpful for you right now.
So I would start with that.
Another one is abstracting it and focusing more on what you're

(39:38):
trying to solve. Again, like what we talked about. So it doesn't really matter if a hundred new things come out, as you mentioned, you probably already have the capabilities out there to solve what you want to. So asking good questions and understanding like, Okay, I care about security. That means I don't want this to happen.
Is that happening? When I make this kind of call, is the data leaving? You don't need to understand everything in that

(40:01):
moment, but you do need to understand what kinds of questions to ask, which is also why the principles are so helpful. Because if you care about things like transparency or you care about supporting open source, we can go into that as well. Like you mentioned, then you can focus on a specific set of terms or concepts to really understand.
But even before that, people don't understand what AI is.

(40:23):
They don't understand what traditional ML is. A lot of organizations are still running legacy ML systems and modern AI, and then they started adding Gen AI and they don't have an understanding of what those concepts even are. Agentic is thrown around a lot and that definition varies a lot too. So I think there are some that it's like, if you are hearing it out there, try to understand it a bit.

(40:45):
But starting with what's actually in front of you and impacting your business, I would say, is the most critical.
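
To make concrete the gap Daniel described earlier, believing you "have your own model" when you actually hold an API key to someone else's endpoint, here is a minimal editorial sketch, not from the conversation; the URLs, key, and endpoint shape are placeholder assumptions. The code looks nearly identical in both cases, which is why the difference is easy to miss; what changes is where the data goes.

```python
import requests

payload = {
    "model": "example-model",
    "messages": [{"role": "user", "content": "Summarize this customer account note."}],
}

# Case 1: "our own model" that is really a vendor-hosted endpoint.
# The request body, including any customer data in it, leaves your network
# and is processed (and possibly stored at rest) on the vendor's infrastructure.
vendor_response = requests.post(
    "https://api.example-vendor.com/v1/chat/completions",  # placeholder vendor URL
    headers={"Authorization": "Bearer YOUR_API_KEY"},       # an API key is not a model
    json=payload,
    timeout=30,
)

# Case 2: a model actually served inside your own infrastructure.
# Same request shape, but the data never crosses your network boundary.
internal_response = requests.post(
    "http://llm.internal.example:8080/v1/chat/completions",  # placeholder internal URL
    json=payload,
    timeout=30,
)
```

This is the kind of "is the data leaving when I make this call?" question Allegra suggests leaders learn to ask, even if they never write the code themselves.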

Chris (40:50):
I'd like to kind of follow up on that a little bit. I think that's great guidance. I've been thinking about it kind of in my own space as you've been talking through it. And, you know, one of the challenges, even if the person who's receiving the guidance is kinda looking within their own walls, is there's still... we're moving so fast, and, like, agentic

(41:13):
is the word of 2025. You know, I mean, it was building up in the latter part of last year.
And it's full on now. And as people... but that's moving so fast right now. How do you get people to focus on that kind of useful thing, even if they're gonna stay within their own walls, and therefore they're kind of limiting the

(41:35):
scope of what they're addressing, to your point earlier? How do you get them to take it from that point of limiting scope to the point of, like, finding points of productivity that are realistic and achievable, you know, within reasonable levels of time and resource on that? Because I see all the time

(41:59):
people struggling with that, you know, even within limited scopes, when kind of figuring out how to make those choices and make it real.
And that's, you know, I think, like, me, I've looked at Daniel as really, really good at that. And I think as you're out there educating the world, like how do you get people, when you don't have a Daniel at a company? How

(42:21):
do you get them to be able to focus on those different things, to focus their resources on?

Allegra (42:27):
Yeah. So again, always back to business challenges. You
should be putting your resources where you actually have areas that you want to make a difference in, and/or investing in research and exploration. And in that sense, you don't need as many barriers and you can have a team. The ones that are leading right now have already had the best talent in research for the last

(42:49):
years.
They're not just starting now and trying to build things. They've invested time in exploration and in failure and in learning. I think that's really critical. So you have that space. So again, it's not a rush decision of suddenly I need to understand this thing fully right now because we've deployed it already.
It's like we've made space and time to explore a concept fully and what it looks like when you build it out in practice, not

(43:12):
just theoretically. So I think that's one thing. Another is the user experience. I was just walking through a wireframe with somebody who doesn't have a technical background at all, and they were asked to build out an agentic workflow, multi-agent experience in the financial services space. And they were having a really hard time with this, so they called me and were asking about it.

(43:33):
And as we walked through the experience, I was like, You can actually see very clearly what you do and don't want. And as you reach each point of questioning or experience, you can address the concept that's being used in that moment. And it becomes a lot easier when you break it up that way and you can relate it very tangibly to what it's doing in practice. And what

(43:54):
we're seeing actually with agents, when you build out the experience, people are like, Oh no, I don't want it to do that. When it has this, then I want it to do this specific thing.
It's like, Okay, what you're describing is automation and deterministic outcomes. That is not a fully autonomous multi-agent experience that you had in mind. And so you can actually come to that quite quickly and people can understand it a lot more simply when they see it in front of them in action. When
(44:15):
more simply when they see it infront of them in action. When
you just have a million words in front of you and you have no way to know what that actually looks like when it's deployed and in a product, then it's not gonna make sense to you. And I don't know if that's an effective way to approach it.
So I think having this user experience thinking, again, like the product mindset when you're going through this will help a

(44:35):
lot in grasping the technical terms.
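
Allegra's distinction between automation with deterministic outcomes and a fully autonomous agent can also be shown in a few lines of code. This is a minimal editorial sketch, not something from the conversation; the risk rule, the stand-in model, and the tools are all illustrative assumptions.

```python
import random

# Deterministic workflow: fixed steps, fixed branching, predictable outcome.
def score_risk(amount: float) -> float:
    # Placeholder rule standing in for a real model or rules engine.
    return min(amount / 10_000, 1.0)

def handle_transfer(amount: float) -> str:
    risk = score_risk(amount)      # step 1: always runs
    if risk > 0.8:                 # step 2: a fixed rule decides the branch
        return "hold and notify the customer"
    return "approve"

# Agentic loop: a model chooses the next tool at run time, so the path through
# the tools is not fixed, which is what makes it harder to test and explain.
def fake_model_choose(history: list[str], tools: dict) -> str:
    # Stand-in for an LLM deciding what to do next (ignores history here).
    return random.choice(list(tools) + ["finish"])

def agent_loop(goal: str, tools: dict, max_steps: int = 5) -> list[str]:
    history = [goal]
    for _ in range(max_steps):
        choice = fake_model_choose(history, tools)
        if choice == "finish":
            break
        history.append(tools[choice](history[-1]))
    return history

print(handle_transfer(9_500))   # always the same answer for the same input
print(agent_loop("review this account", {
    "lookup_balance": lambda ctx: "balance: 1,240",
    "summarize": lambda ctx: f"summary of: {ctx}",
}))                              # the path can differ from run to run
```

Run it twice: the first call always returns the same decision for the same input, while the second can take a different path each time, which is exactly the testing and explainability gap being described.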

Daniel (44:38):
Yeah. I think every time I've actually asked people about
what they want with their AI agent, it turns out what they want is either a RAG chatbot or an automated workflow. I think that's basically every time. I'm sure there's other cases out there. But, yeah, Allegra, I feel like we will definitely

(44:59):
need to have a follow-up to the show to get more of your insights and continued insights from Lumiera.
As we kind of draw to a close here, I do want to give you the chance to kind of look forward a little bit, towards the future. We've obviously talked about a lot of challenges in terms of

(45:19):
leadership and trust and implementing responsible AI practically. What, from your perspective, gets you excited about the future of how companies are adopting this technology or the possibilities of how they might adopt this technology?

Allegra (45:36):
Yeah. I think a really good one just happens to be our
company vision, which is a future equipped for humanity. We had started with humanity equipped for the future, but we want to be human-centered here and actually shape technology for what we care about and what we actually want to maintain about the human experience rather than having technology

(45:56):
shape the ecosystem and our surroundings without us involved. That's really where I see the future going, is all of us being a lot more actively involved in the decisions we're making around AI, whether you're a leader or not. Then the other is that responsible AI just becomes what AI is.
It's the standard. You're not thinking about it as an add-on

(46:17):
at the end or something that feels like a hindrance or a barrier. It just is the standard when you're building. That's the future that I hope for.

Daniel (46:26):
That's awesome. That's a great way to end. Yeah, like I
say, we'll definitely have to have you back because I feel like we could have talked for a few more hours. But thank you for the work that you're doing. Thank you for the way that you're helping leaders in this space.
And we'll definitely provide links in the show notes to what Allegra's working on and some other talks. So

(46:50):
make sure you check that out. And, yeah, talk to you again soon, Allegra. It was great.

Allegra (46:55):
Thank you so much.

Jerod (47:03):
All right. That's our show for this week. If you haven't checked out our website, head to practicalai.fm, and be sure to connect with us on LinkedIn, X, or Bluesky. You'll see us posting insights related to the latest AI developments, and we would love for you to join the conversation. Thanks to our partner Prediction Guard for providing operational support for the show.
Check them out at predictionguard.com. Also,

(47:26):
thanks to Breakmaster Cylinder for the beats, and to you for listening. That's all for now, but you'll hear from us again next week.