
October 30, 2025 31 mins

Agentic AI is pushing leaders to rethink roles, processes, and governance far beyond another automation wave.

In this episode, Andreas Welsch speaks with Danielle Gifford, Managing Director of AI at PwC, about how organizations should prepare for agentic AI. Danielle draws on frontline experience with enterprise pilots and deployments to explain why agents require new infrastructure, clearer role boundaries, and fresh approaches to governance and workforce design.

Highlights from the conversation:

  • Why agents are different from classic rule-based automation: they’re goal-driven, context-aware and can act with autonomy, which creates both opportunity and risk.
  • Where companies (especially in Canada) are on the adoption curve: pilots and POCs are increasing, but full-scale deployments need better data, guardrails, and change planning.
  • How leaders should approach agent projects: start with the business problem, map processes, and decide where human + agent collaboration delivers the highest value.
  • Workforce design and the “digital coworker”: practical advice on defining role boundaries, delegation rules, and how to evaluate outcomes when humans and agents collaborate.
  • Multi-agent orchestration and governance: how to prevent agents from converging on weak solutions and how to build review, control, and accountability into agent systems.

Key takeaways:

  1. Business first: define the problem before choosing technology. Agents aren’t a silver bullet — they must solve a real, scoped pain point.
  2. Move from experimentation to implementation: Canadian enterprises are ready to progress beyond proofs of concept and invest in production-ready agent solutions with proper controls.
  3. Agents ≠ automation: treat agents as goal-based collaborators that need explicit boundaries, evaluation metrics, and workforce redesign.


If you lead teams, product strategy, or AI initiatives and want practical guidance for turning agent hype into measurable outcomes, this episode is for you. Listen now to get the full conversation and actionable next steps.

Questions or suggestions? Send me a Text Message.

Support the show

***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.


Level up your AI Leadership game with the AI Leadership Handbook:
https://www.aileadershiphandbook.com

More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Andreas Welsch (00:30):
Today, we'll talk about how to evolve your leadership with agentic AI, and who better to talk about it than someone who's actually actively working on that: Danielle Gifford.
Hey, Danielle.
Thank you so much for joining.

Danielle Gifford (00:41):
Awesome.
Thank you so much for having me here this morning.

Andreas Welsch (00:45):
Great.
Why don't you tell us a little bit about yourself, who you are and what you do.

Danielle Gifford (00:49):
Okay, sounds good.
So my name is Danielle Gifford.
I am currently a Managing Director of AI at PwC.
And I would say my role really focuses on two different things.
One is our solutions and our products and what we're actually offering in markets.
So everything from strategy to governance, to literacy, to actually building and deploying models, from traditional AI, it still

(01:12):
feels funny to say traditional AI, to generative AI, to agents.
And then the other part of it is actually co-leading our emerging solutions.
So as I'm sure you're well aware, in the way that we do work, there's just a better way for cloud migrations, application migrations, data migrations than the traditional kind of line

(01:32):
by line code transformation.
There's tools that can support us in that.
And so those are the two hats that I wear at PwC.
I also teach on the side at the University of Calgary, so I am a professor in their MBA program, teaching an applied AI and business course.
So helping business students actually understand what is and what isn't AI, how to critically evaluate it, and then how to

(01:55):
actually look at scoping a use case.

Andreas Welsch (01:58):
That's awesome.
And we talked a little bit about that before going live, so let's make sure we talk about that here on the air as well, because I think it's incredibly important to also prepare the next generation of leaders for this transformational time that we're in, and have an idea for what's ahead and how we can lead that successfully.

(02:19):
Yeah, absolutely.
Yeah, so awesome, folks.
If you're just joining the stream, drop a comment in the chat where you're joining us from.
I'm always curious to see how global our audience is.
And also don't forget to get your copy of the AI Leadership Handbook so you can learn how to turn technology hype into business outcomes.

(02:40):
And yeah, what do you say?
Should we play a little game to kick things off in good fashion?

Danielle Gifford (02:46):
Yes.
I love a little game.

Andreas Welsch (02:49):
All right, so you'll see the sentence here when I hit the buzzer.
You'll also see the surprise word, and I would love for you to answer with the first thing that comes to mind.
You'll have 60 seconds for your answer.
And for those of you watching us live, drop your answer in the chat, and why as well.
Are you ready for What's the BUZZ?

Danielle Gifford (03:08):
Yeah, I'm a little nervous and anxious, but
I'm ready.

Andreas Welsch (03:13):
I'm sure you'll do just fine.
So let's do this.
If AI were a, let's see, if it were a color, what would it be?
60 seconds on the clock.
Go.

Danielle Gifford (03:33):
Okay.
Interesting.
There's green, there's the primary colors.
I feel like my mind immediately goes to yellow, just because that is my favorite color.
The reason I would say yellow is it's energetic.
It brings light to things, I would say, and being able to actually search and find and summarize, and it has, most of

(03:56):
the time, like a very good outcome.
But then the other thing with yellow, and if you can even think about it in stoplights, is it's a little bit of a sign of caution.
And although it has this like enthusiasm and lightness and energy, there's also a bit of like caution towards how you're using it, when you're using it and what the actual application is.

(04:16):
So final answer is yellow.

Andreas Welsch (04:19):
That's awesome.
I love it.
I was a little concerned you might say, it's the association.
But let's hope we don't get to that part.
So really good to see the positivity and the optimism, and the sun is yellow, so it gives us some warmth and energy, like you said, so awesome.

(04:43):
I'm trying to figure out how to make a good transition to our topics, because it seems a little abrupt with all that energy that we see in the market and all that yellowness.
What do you see from where you are? You're working in Canada,

(05:03):
you're working with a lot of large companies, brands across different industries.
You have a front row seat at what's happening, what people are actually doing.
What are you seeing as the state of AI hype and adoption, and where are companies on their path?

Danielle Gifford (05:18):
Yeah, absolutely.
I feel like it's a really good question in terms of what are we hearing, like the actual noise that's in the system, versus what's actually happening.
I would say from a practical perspective, what we're seeing in Canada is we're actually seeing companies start to really push forward into the adoption of not only just AI and

(05:39):
generative AI but also agents.
Which is something that's really exciting.
I'm sure, for anyone that's listening, Canadians are good, they're kind, they're humble, but they're often a little bit risk aware.
And so when it comes to the adoption of new technologies, we're not always the first to hop on the bandwagon.
And so what I'm starting to see in companies is like, not just

(06:02):
the original approach of let's make sure that we have our data properly in the right systems and it's cleansed and we can do the analytics on it.
That's obviously one key part of the conversation, but it's actually, where are the opportunities for us to leverage and look at agentic or generative AI within our offices, both from a back office perspective and from a front office perspective.

(06:24):
And so I would say this is the first time that I've started to see that real shift, especially having been in the AI space, I would say, for the last six to seven years.
We had the hype of AI and then we went through the AI winter, and then ChatGPT launched on the scene in November of 2022.
And now all of a sudden it's not just something that's a buzzword, or people are looking at it and just being like, oh, I'll get to

(06:48):
it.
It's actually something that's serious, that's on the agenda, that executives have put budget towards.
And they're putting the guardrails around it to actually start to pilot and then move into deployment with the certain opportunities that make the most sense for their organizations.
Very long-winded way to say, from a Canadian perspective, we're starting to see a lot of movement from not just the top

(07:11):
level, but throughout the organization.
And then the actual adoption of it into some of the systems and processes.

Andreas Welsch (07:17):
So that makes me curious.
First of all, I fully empathize with risk aware, coming from Germany.
Yeah.
A lot of times it's even risk averse, the next step up on that scale.
From my perception, generative AI has become a lot more mature, whether you read the reports.
Yeah.
And it's the trough of disillusionment, or the

(07:41):
realization that things aren't as easy, aren't as simple as we initially thought.
And we've gone through this before in other hype cycles. It seems that agentic AI is not just coming up that slope, and organizations are trying to figure out, what do we do with this?
How are you seeing this play out in Canada?

(08:03):
Are there proofs of concept being spun up?
Is it piloting, is it putting things into production? Specific areas of business where you see companies are looking at this more seriously than others?

Danielle Gifford (08:15):
Yeah.
One of the things I would say on that question, or even on that topic, is you mentioned generative AI is a little bit more commonplace now, and I remember reading a report from Gartner that said, as of May of 2025, about a thousand different vendors or platforms, like the Workdays, the Salesforces, the SAPs, et cetera, have introduced some form of

(08:37):
generative AI into their products.
And so that means that, you know, it's gonna be here regardless of whether you want it or not; the applications are getting turned on in the systems that you're using every day.
Now we're starting to see the same sort of trend, I would say, with agents, where a lot of companies and platforms are starting to look at building out the correct systems and

(09:00):
infrastructure to actually support agents.
One of the unique things, and I think we'll get to this a little bit later, is everyone thinks that it's just easy to drop in an agent.
They're like, oh, I have an agent.
I'm gonna give it a goal, and then it's gonna get me to that goal in simple form, and I'm gonna save money.
I'm gonna save time, and I'm gonna save manpower.
But it's not as easy as it seems, right?

(09:22):
Like it's not that simple.
And so when you're thinking about agents, very much like you think of self-driving cars, you still need the infrastructure and the rules and the logic and the guardrails around it.
If we think about Waymo, the way that it works today, right?
Like you have all of the roads and the streets, like you have

(09:42):
the certain logic around actually stop signs, or you have stop lights, in terms of what to do.
So you know and understand what's there.
But with agents, you still have to set up some of that infrastructure inside in order for them to be effective, in order for them to work well, and in order for you to actually have the right guardrails, controls and processes around

(10:03):
them to be effective actually within your systems.
And I know that Microsoft has started to see, and they've posted this, some early gains within their business, specifically within Copilot.
I think that they were saying that agents have supported something like 9.4, 9.5% higher revenue per seller, and the ability to actually customize some of the products and what

(10:25):
they're actually doing.
And so we are starting to see some of that from an agent side.
I will say, at least from a Canadian perspective, we do see companies that are pushing towards agents and that are wanting to do proofs of concept and pilot them, but we haven't seen as much in terms of that actual full deployment.
Unless you're looking at a company like Cohere, which is a

(10:48):
really famous company here in Canada that actually builds foundation models.
Enterprises, both public and private, are playing around with it, but they're still getting to the, crossing the chasm, I would say, of actually bringing them into deployment.
So it's good momentum and it's good trajectory, but it's actually just like, when do you make that leap to actually seeing the

(11:09):
impact in your systems.
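To make the guardrails point concrete: below is a minimal, hypothetical sketch of what explicit boundaries around an agent can look like in code. The class name, tool names, and thresholds are illustrative assumptions, not taken from any product or deployment mentioned in the conversation.

```python
# Hypothetical sketch of the kind of guardrails discussed above: explicit
# limits an agent must operate within before it is allowed to act.
# All names and thresholds are illustrative, not from any specific product.
from dataclasses import dataclass, field

@dataclass
class AgentGuardrails:
    allowed_tools: set[str] = field(default_factory=lambda: {"search_crm", "draft_email"})
    max_spend_per_day_usd: float = 100.0
    requires_human_approval: set[str] = field(default_factory=lambda: {"send_email", "issue_refund"})

    def check(self, action: str, cost_usd: float, spent_today_usd: float) -> str:
        """Return 'allow', 'escalate', or 'block' for a proposed action."""
        if action not in self.allowed_tools and action not in self.requires_human_approval:
            return "block"                      # outside the agent's defined boundary
        if spent_today_usd + cost_usd > self.max_spend_per_day_usd:
            return "block"                      # budget guardrail
        if action in self.requires_human_approval:
            return "escalate"                   # a human stays accountable
        return "allow"

guardrails = AgentGuardrails()
print(guardrails.check("draft_email", cost_usd=0.02, spent_today_usd=1.50))   # allow
print(guardrails.check("issue_refund", cost_usd=25.0, spent_today_usd=1.50))  # escalate
```

The design choice mirrors the Waymo analogy: the agent never sets its own limits; the allowed tools, budget, and escalation rules are built around it before it acts.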

Andreas Welsch (11:12):
It seems like that makes perfect sense, given where the industry is, where the technology is, what you mentioned: building the guardrails, building the infrastructure around it in your organization, maybe between organizations.
And it seems like we're really at this early moment where we're seeing this potential and we're able to prove it in certain scenarios, but there's so much more to be

(11:34):
done to really capitalize on it.
There, I'm wondering: how do you see leaders viewing this topic?
What's the best, or what's the right way, to think about it, knowing that there is so much hype?
There is so much push, but there's also so much more to do.
Yeah.

Danielle Gifford (11:49):
I don't know if there ever is like a right or wrong way to think about agents, to think about technology, but I always do think that it goes back to, what is the problem that you are trying to solve for?
And similar to what we were talking about before, leaders think, not all leaders, but leaders that maybe aren't as technical think, that it's as simple as just dropping an agent

(12:11):
in, and then it will do what you need to do.
But really what it means for businesses is that you need to actually take a look at your processes, the people that you have, the systems that you're working with, and almost do process mapping to understand what are all of the intricacies within that, where are the kind of high value areas where an agent,

(12:31):
a single agent or like an orchestration of agents working together, could provide the most impact.
That's where I'm seeing leaders see a little bit of a miss from like the actual hype versus where the power of agents can be.
The other thing too, which is unique, and I'm sure that you've heard it a lot and I'd love to get your take on it, is this

(12:53):
whole notion of now we're gonna have digital coworkers, and so what does that actually mean?
And so that's a lot.
Even if we think about workforce transformation, if we think about learning and development, if we think about human resources.
And so when you're working side by side with an agent, what does that actually mean for you, and how do you have that kind of collaboration or that cooperation together between

(13:18):
what is a human doing and then what's an agent doing, and where do you actually come together where it's human plus agent?
And so I'm curious actually, if you've had conversations around that digital coworker and then how do you actually manage that?

Andreas Welsch (13:32):
So, great point.
I actually created two courses with LinkedIn Learning on that topic.
Perfect.
Of how do you bring AI into your organization when it becomes a coworker.
So a little plug here and a recommendation for you to take a look at those.
But I think we see a lot of times people and vendors

(13:55):
comparing AI to humans.
I've seen this term of, what was it, AI employee, come up lately.
And to me, that's such a misnomer and a mislabeling and miscategorizing of what these tools are at the end of the day.
They are tools.
They have access to information.
They're built into a software you use every day, but they're

(14:17):
not a replacement or not equivalent to your human colleague.
That sits at the next desk over, in the office across from you or something.
So I think we need to be careful how we refer to them in general, to raise the right expectations.
On the other hand though, and that's where I'm a little conflicted myself, is if you look at how work is done in a

(14:39):
business.
We see many parallels that we can now apply to agents as well.
Here, there are guidelines, there are codes of conduct.
There's standard operating procedures, things like that.
How do we divide work, and what couldn't we really apply to agents?
And I think especially the part of the collaboration, when do

(14:59):
you hand something over?
What do you ask?
What information do you need to give is so important.
As a leader yourself, I'm sure you've gone through some kind of leadership training.
I was fortunate to do that earlier in my career.
And there's some basics about how do we delegate tasks to another person that we want to work with, or that's part of our

(15:20):
team.
It's usually about what is the goal that I want you to achieve?
What is the context in which you work?
Are there tools?
Is there additional data to work with, other people or other resources to use?
And the last one, which is the most important one for me, is how do you evaluate if the outcome is actually good?
A lot of times you say, yeah, that isn't good enough.

(15:43):
Or if you're an employee and you manage this, that isn't good enough.
You say, why is it not good enough?
Oh, I don't know, but do it again.
I feel we're seeing a lot of that now with AI, with us as individual users almost becoming leaders.
So we need to be clear of those things.
But it's really this division of labor, to come back to that point, to say, where can I delegate something safely?

(16:03):
And I know that I won't be compromising quality.

Danielle Gifford (16:05):
Absolutely.

Andreas Welsch (16:06):
And I won't be compromising accountability; that is still emerging.
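One way to picture the delegation checklist Andreas describes, goal, context, tools and data, and a way to evaluate the outcome, is as a structured brief handed to a human or an agent. The sketch below is illustrative only; the field names and the example task are assumptions, not part of the conversation.

```python
# Illustrative only: one way to turn the delegation checklist above
# (goal, context, tools and data, evaluation criteria) into a structured
# brief that a human or an agent could be handed. Field names are assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class DelegatedTask:
    goal: str                      # what outcome is expected
    context: str                   # why it matters, constraints, audience
    tools: list[str]               # systems or resources the agent may use
    data_sources: list[str]        # where to look things up
    acceptance_check: Callable[[str], bool]  # how "good enough" is decided

task = DelegatedTask(
    goal="Summarize Q3 pipeline changes for the sales leadership meeting",
    context="One page, non-technical audience, focus on deals over $250k",
    tools=["crm_search", "spreadsheet_export"],
    data_sources=["crm.opportunities"],
    acceptance_check=lambda draft: len(draft.split()) < 400 and "$" in draft,
)

draft = "Pipeline grew by $1.2M, driven by two enterprise renewals..."
print("meets the bar" if task.acceptance_check(draft) else "send it back with feedback")
```

The acceptance check is the piece the conversation flags as most often missing: without an explicit definition of good enough, both humans and agents end up in the unhelpful "do it again" loop described above.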

Danielle Gifford (16:11):
I think too, on that point, it almost goes back to the basics of workforce design.
And so what are the role boundaries, like you were saying, what are the objectives?
What are the tasks?
And if we are gonna have these hybrid human and AI teams, what does that really mean?
And I would say, and I'll go out on a little bit of a ledge here, but a lot of companies don't have great processes in place.

(16:34):
Or when they have rules, like everything that you named in terms of context, structure, like what tools, what are the boundaries, how are you evaluated? It's not that clear cut what that actually is.
And so I think what agents are doing is really actually forcing teams and leadership to focus on what are those boundaries

(16:54):
and what does that look like?
And so as companies and organizations start to implement agents or look at agentic AI, whether it be a solo agent or an orchestration, I think that they'll actually see a lot of business redesign.
And that will allow them to have a lot more value in what they're doing, because the way that things are done today isn't always necessarily the best way, but no

(17:16):
one can point to what it is or how it's evaluated or who does what.
It's always kinda living in people's heads, and so how do you actually get that information out that's in people's heads or in systems to support those hybrid workers.

Andreas Welsch (17:32):
I think that's an excellent point because a lot
of these process, a lot of thesesystems, especially if they're
the ones that your business runson.
Have been implemented have beendesigned 20, 25 years ago, maybe
15 if you're a little moremodern or your first move to the
cloud or something like that, ifyou are more on, the leading
edge.
But it's not the case that youcan reconfigure these things so

(17:54):
easily.
And, one thing that comes tomind is we, talk a lot about
and, just know as well workforcedesign.
Depending on the size of thecompany and the industry that
you're in, you might be goingthrough this process every 18 to
24 months.
Reorganization change, the worldhas changed outside.
We need to change too.
We need to adapt and I'm notseeing a lot of conversation

(18:16):
about how do your agents need toadapt.
Yes, there are still finance andfinance tasks and it's still HR
tasks, but once you get moretowards go to market, you get
more towards sales, you get moretowards the consulting side.
How are things going to changein the business on the customer
facing side?
And how will you be able toadapt that?
I think that's probably the nextfrontier that we'll talk about

(18:37):
soon once agents have become thenorm in business.

Danielle Gifford (18:41):
Yeah, absolutely.

Andreas Welsch (18:45):
Which also reminds me.
A couple months ago I was having a conversation with a former colleague of mine.
And we were talking about this concept of agents.
And I said, Hey, I think business is really changing fundamentally, and leadership is changing and leaders need to work differently with their teams.
And my former colleague said, no, I don't think so.
It's just another kind of automation.
You're just automating a process.
You used to do it with rules, used to do it with robotic

(19:08):
process automation, maybe a little bit of machine learning.
Now you do it with agents.
What's the big deal?
I'm wondering, where do you stand on that?

Danielle Gifford (19:16):
Yeah, it's an interesting conversation and I
can see the different points ofit, especially if you are a
technologist and like my fiance,like it's been in software
engineering for a number ofyears and now is like working in
another company where they'releveraging generative AI and
agents for cyber security.
And so it's at a very simplelevel.
Maybe it is automation, but ifwe, go back to go what is

(19:40):
automation?
It's rule-based.
It's logic.
It's a predefined script.
So if X happens, do y if thetemperature drops.
In Calgary, it's very cold.
So if the temperature dropsbelow X degrees, like turn on
the heat, right?
Whereas agents are different,right?
They're goal-based, they havecontext, they have objects that

(20:00):
they interact with, they haveboundaries.
And just because if X happens,do Y, just because they come up
against a hurdle.
Doesn't mean that they can'tactually go around that.
And so that's where theintelligence comes into play as
an agent.
And like very similar to you,what in reasoning models to an
extent is thinking through whatis the best way to solve this

(20:21):
problem?
And what information do Iactually need and what do I have
access to?
And what is the what is my levelof autonomy or authority to
actually act on that action?
And so that's where I would saythere is that big discrepancy
between what.
Automation is, and then whatagents are, is you have that
flexibility and then you havealmost not a lack of

(20:43):
supervision, but for models thatare a little bit more advanced,
you don't have that samesupervision like you would over
automation, where you know, ifthe script breaks you figure out
a way to fix it, right?
There's a very different confineand kind of set of instructions.
And boundaries and guardrailsand access that you would give

(21:21):
an agent versus automation.
And that's where I think thekind of difference is.
And if you think of that's justlike one agent.
Now, what if you actually paireda couple agents together and how
do they work together?
And if they come up with anidea, how do they critically
evaluate it and not just allagree that's the best idea?
And so there's different layerlayers of intelligence to this

(21:44):
that make it so much more thanjust automation in my mind.
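A compact way to see the distinction Danielle draws: rule-based automation is a fixed trigger and a fixed response, while an agent pursues a goal within boundaries. The sketch below is a toy illustration, assuming a plan_next_step function that stands in for a reasoning model; it does not reflect any specific vendor's implementation.

```python
# A minimal sketch of the contrast described above. The thermostat rule is
# classic automation: a fixed trigger and a fixed response. The agent loop is
# goal-based: it keeps choosing actions until the goal is met or it hits a
# boundary. plan_next_step() stands in for a reasoning model and is hypothetical.

def thermostat_rule(temperature_c: float) -> str:
    # Rule-based automation: if X happens, do Y. Nothing else.
    return "turn on heat" if temperature_c < 18 else "do nothing"

def run_agent(goal_met, plan_next_step, act, max_steps: int = 10) -> list[str]:
    """Goal-driven loop: observe, plan, act, repeat, within a step budget."""
    history: list[str] = []
    for _ in range(max_steps):               # boundary: bounded autonomy
        if goal_met(history):
            break
        action = plan_next_step(history)     # may route around obstacles
        history.append(act(action))
    return history

# Toy usage: the "goal" is simply to have taken three actions.
steps = run_agent(
    goal_met=lambda h: len(h) >= 3,
    plan_next_step=lambda h: f"step {len(h) + 1}",
    act=lambda a: f"did {a}",
)
print(thermostat_rule(-5.0), steps)
```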

Andreas Welsch (21:49):
I love that.
So the two points that I think I heard:
where there's more autonomy in the decision making, in addition to the level of automation that you have.
And then on the other side, when it's about multi-agent, how do you make sure you really get the best outcome and the best output, if you pair them and have them review and critique

(22:12):
each other's results?

Danielle Gifford (22:13):
Yeah, and I know that there's been levels, or examples, of agents within scientific research, where you have someone that goes out and does the research.
You have someone that acts as like the peer reviewer.
You have someone that, you know, sorts for something else, all these different types of agents, but how do you put in the right boundaries and confines so they don't all just agree on the

(22:35):
first answer that is given to them.
What does that look like?
And so to me that level of complexity really puts it on another level of how you need to be thinking about them and how you might actually need to be governing them.
And not every company, or not every area, is at that specific state.
But as we make this evolution, as we are saying, like generative

(22:58):
AI is now more commonplace, classic prediction models are now traditional AI, which is hilarious.
And as it becomes more commonplace and people get more comfortable with them, what does that actually look like?
And I still think we're very early days of understanding how to set that up and have it in the right confines.
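The researcher-and-reviewer setup described here can be sketched as a small loop in which a drafting agent and a critiquing agent must converge before an answer is accepted, with escalation to a human if they do not. The functions below are stand-ins for model calls, not a real library API, and exist only to show the control flow.

```python
# Hypothetical sketch of the proposer/reviewer pattern mentioned above:
# one agent drafts, another critiques, and a draft is only accepted once the
# reviewer signs off or a revision budget runs out. The two "agents" here are
# plain functions standing in for model calls; nothing below is a real API.

def researcher(question: str, feedback: str | None) -> str:
    # Stand-in for a drafting agent; would normally call a model.
    return f"Draft answer to '{question}'" + (f" (revised after: {feedback})" if feedback else "")

def peer_reviewer(draft: str) -> tuple[bool, str]:
    # Stand-in for a critiquing agent; forces at least one revision so the
    # first answer is never accepted unchallenged.
    if "revised" not in draft:
        return False, "cite the underlying data and flag assumptions"
    return True, "looks sound"

def answer_with_review(question: str, max_rounds: int = 3) -> str:
    feedback = None
    for _ in range(max_rounds):                      # governance: bounded loop
        draft = researcher(question, feedback)
        approved, feedback = peer_reviewer(draft)
        if approved:
            return draft
    return draft + " [escalate to a human: no agreement reached]"

print(answer_with_review("How did churn change quarter over quarter?"))
```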

Andreas Welsch (23:18):
So agents agreeing with each other sounds like we're converging towards the mean, like we're doing in so many other areas when we use generative AI, when many people use gen AI.
And I'm curious, so you mentioned you're a professor at the university.
You teach MBA students.

(23:39):
How do you balance that, and what do you teach students?
What do they need to know, especially as they're going for the MBA, as they move into leadership roles and higher levels of accountability in leadership in general?

Danielle Gifford (23:52):
Yeah.
I would say, with that, there's so much that you can cover when it comes to AI.
But one of the big things that we try and teach in the class and try and focus on is what is and what isn't AI.
What is the problem that you're trying to solve, first and foremost, and then how do you actually critically evaluate the

(24:13):
different models or tools that are out there?
You have models like NotebookLM, you have Synthesia.
You have Rep, and so how are you actually looking at, what was the model trained on?
Who are the main creators of this?
What is the actual price for this?
Where does your data go?
What level of access or control do you have over that?

(24:34):
It's almost like the critical thinking around how do you use and leverage AI, and then how do you think about the guardrails that need to be around it from a business perspective. I think traditionally in business schools, there really was always just a focus on theory, which is great.
I think theory is important for a lot of things, but what I'm seeing now, more than ever, and this is myself included, so

(24:57):
maybe I am biased in this, but people wanna be hands on.
They wanna learn, and we were talking about where students are today.
They already know and understand prompt engineering.
So they're like, teach me how to build an agent, or show me how I can vibe code something on Lovable or Cursor.
And I think they're really surprised, especially as people that are non-technical or don't have that technical training,

(25:19):
just at what they can actually build in a short amount of time.
And I remember even in my startup days, when we were building a website, like if you're using Wix, or if you even have to outsource and find a developer, you're like, do I get a full stack developer?
Do I get a front end?
Do I get a back end?
What does that look like?
And I think I want it to look like this.

(25:40):
And it's like weeks and like tens of thousands of dollars.
And then you might not always get what you want, whereas now you have these tools.
And so as business students think about, not just in their corporate lives, but as they also think about entrepreneurship, what does that mean to them?

Andreas Welsch (25:58):
I think that's a really great way to, to frame
it, make it hands-on, go beyondprompt engineering and most of
all, teach the critical thinkingand that's really what matters
When we, have so much choice.
That's, one of the things for mejust because he can generate a
lot more information doesn'tmean that the, quality is

(26:19):
better.
And you end up with decisionfatigue and need to figure out
which of these 10, 12 30different versions do I really
and so but more on the criticalthinking skills.
What is real, what makes sense,what is logical, what is
plausible?
And how do you.
How do you check some of thefacts?

(26:41):
How do you, do that too?
So great to hear how, you'reapproaching that.

Danielle Gifford (26:44):
Yeah.
I would say too, just before we leave that, one of the things, and I'm sure you're seeing this in your conversations, or at least with some of your clients, is there now is starting to be a little bit more signal, even, at least from the Canadian government,
around what rules and regulations are gonna come for corporations when they're thinking about leveraging AI.

(27:05):
And so if we go back to the EU AI Act, there's literacy in place for anyone that is accessing, developing, using, or deploying AI.
And so these are the types of things that are also gonna start to become commonplace, at least in, you know, Canadian companies.
And like in Canadian society, as we have a new Minister of Artificial Intelligence, Evan Solomon, which is fantastic.

(27:28):
And there are these things that are coming into place that business students, or even anyone in general, needs to see these signals and understand how they're gonna impact businesses down the line.
And so when you are building, making sure that you have, and we say the word guardrails, I feel like it's so overused, but making sure that you have the right guardrails and controls

(27:48):
and processes in place to support the technology that is in fact solving a problem.

Andreas Welsch (27:56):
To me, that sounds like a very responsible
approach to take.
It's not just the survival ofthe fittest.
If you're cutting edge andleading edge, great, we we want
to work with you.
If you need a little moreguidance or you're in different
sectors where it's not all abouttech and software then, sorry,
you're on your own.
I think it makes sense to takepeople along on, on the journey
so that everybody in the,country as a whole and citizens

(28:19):
as a whole benefit, so we'vetouched on a lot of different
topics, starting with the hypewhere adoption is, you said
companies are looking at thisgenerative AI, a little more
established agent AI.
We're getting there.
Leaders should be aware ofwhat's happening and, what's

(28:39):
real, what's not.
We need to teach our MBAstudents.
So we've covered a lot ofground, but I'm curious in your
own words, what would you sayare the key three takeaways from
our show today for our audience?

Danielle Gifford (28:51):
Oh, key three takeaways.
That's a good question.
I would say, one, focus on business problems over technology.
So business first, technology second.
That very much applies to any type of AI or any type of technology, because sometimes when you're looking at a problem,

(29:12):
and it's almost like a flower, you realize that the petal isn't the problem.
It might be the root.
The second thing that I would say is, for Canadian enterprises and for Canadian leaders, not just playing around and piloting, but really actually starting to take the next steps of actually implementing

(29:33):
different forms of AI, whether it be traditional AI, generative AI or agents, into their businesses.
I know that we're risk aware, and I know that we have different rules and obligations and processes that we like to go against, which is important, and be compliant.
But if we think about competition, and if we think about how we can build a better community and a safer business

(29:56):
for all, that's something that's really important.
And then the final part of it is, AI is not just automation.
Agents actually have autonomy.
They have goals, they have purpose.
They're gonna work in certain confines.
They're gonna have different actions that they're taking.
And so it's really important that when you're looking at

(30:17):
problems and you're figuring out what technology to actually use to solve, or get 60, 70% of the way in the solution for a problem, that you understand what type of technology and how it should actually be built and deployed.

Andreas Welsch (30:32):
Awesome.
Wonderful.
Thank you so much for summarizing what you as listeners, as viewers should keep in mind.
Danielle, it's been great having you on the show.
Thank you so much for joining us and for sharing your experience with us.

Danielle Gifford (30:48):
Thank you so much for having me.
I'm looking forward to reading the book, and yeah, it's been a pleasure.
