Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:02):
The Institute of Internal Auditors presents All
Things Internal Audit Tech.
In this episode, Warren Stippich speaks with Ethan Ani
about the evolving role of agentic artificial intelligence
in internal auditing. They discuss
how agentic AI differs from traditional AI,
its impact on risk assessment,
and the skills internal auditors will need to adapt.
(00:22):
They talk about the ethical considerations, automation
of controls testing, and the future
of AI driven audit execution.
Hey, so let's dive in here, Ethan.
Uh, glad to see you today
and, uh, we'll have some fun with this conversation.
So, out of the gate, uh, I'd like
to have a little discussion with you, Ethan,
(00:43):
and, and some perspective.
Uh, can you give, uh, an explanation of
what agentic AI is
and how it differs from traditional AI systems?
It's a great question, Warren.
You know, it's a big topic in industry right now,
and, you know, in short, agentic AI is kind of
what we've all been thinking AI would be,
you know, for many, many years.
(01:04):
It's, it's that proactive artificial intelligence that has
that ability to act.
I mean, you know, let me give you my favorite analogy
and then I'll go through some of the differences.
You know, my favorite analogy is: traditional AI is like
a good librarian.
You ask a question and it goes,
and it finds you the answer.
Agentic AI is more like an investigative journalist,
where it goes and seeks out information, it finds leads,
(01:27):
it makes decisions, and it comes back
and it gives you that information that, you know,
you could basically say, Hey, go tell me what's going on
with, uh, Grant Thornton these days.
And, um, you know, it'll come
and it'll write a full article for you based on that,
rather than just kind of giving you, you know,
an answer. And it'll make some
decisions on your behalf as well.
I think that's really something that, um, differentiates it.
(01:47):
But in looking at the differences between traditional AI
and agentic AI, you know, there's
several categories of differentiation.
Uh, one of them is reactivity.
Traditional AI tools like ChatGPT
or Claude, which is, um, you know, a tool
that you can get on Amazon, which is, you know, part of
Anthropic's system.
They're very reactive, right?
(02:09):
You, you ask a question, it comes back with an answer,
and it could be a very good answer,
or it might not be totally correct.
It'll be, you know, like a human, it'll make mistakes,
but it's not very proactive. Agentic AIs,
on the other hand, are very proactive, right?
It might wake up in the morning
and say, Hey, Ethan, I know that you were looking at, um,
you know, what's going on with Grant Thornton yesterday.
I just want you to know there's a news update
(02:30):
that I think you would find very interesting.
Can I, can I give that to you today?
Because I know that
that's something you were thinking about.
Or, Hey, Warren, I know that you're really interested,
you know, in audit quality, I see that there's a new update
for the audit standards board,
or the IIA has updated its policies.
Can we do a quick summary of those?
So, agentic AI is very proactive, um,
just like a human being might be,
(02:50):
and the other one is autonomy,
and this is where it gets a little bit interesting, right?
So, you know, again, traditional AI like ChatGPT
is really limited to the questions
that you pose to it, versus, uh, some
of the new agentic AIs that are coming out.
They have the ability to make decisions
and take actions on their own.
It really gets a little bit scary.
And, and so we talk about the risks associated with that.
(03:12):
'cause there should always be a human
in the loop at this point.
Um, but, you know, we're starting to see, uh,
AI checking AI, for example.
We call those multi-agent systems.
You know, one of the areas where I think
that there's a big difference, which really kind
of gets me excited
and interested, is that there's a goal orientation
for agentic AIs.
(03:32):
And this is something that I find to be super interesting.
You can almost give the AI its own persona, um,
and say, you know, your goal is to be the best, um,
senior associate in internal audit, uh, possible.
And you're going to review the work of some
of the associates, and we want you to understand how
to lead walkthroughs, how to create flowcharts, how to,
you know, do assessments.
(03:53):
And that AI is never gonna give up that goal.
Um, it's going to be something
that it continually works towards improving on its own
versus a traditional AI.
It doesn't have goals; it's just, you know,
task oriented.
It executes on tasks,
but it doesn't really have a goal of where can I,
where can I take things in the future?
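The "persona" idea Ethan describes can be sketched in code. The snippet below shows one way a goal-oriented system prompt might be pinned so it persists across every task the agent runs; the prompt text and config shape are invented for illustration and don't reflect any specific framework mentioned in the episode.

```python
# Illustrative sketch (not a real framework): an agent "persona" is just a
# standing system prompt combining a role with goals the agent never drops.

AGENT_CONFIG = {
    "persona": "You are the best possible senior associate in internal audit.",
    "goals": [
        "Review associates' workpapers for completeness",
        "Lead walkthroughs and document them as flowcharts",
        "Perform risk assessments against the audit plan",
    ],
}

def build_system_prompt(config):
    """Combine the persona and standing goals into one system prompt."""
    goal_lines = "\n".join(f"- {g}" for g in config["goals"])
    return f"{config['persona']}\nYour standing goals:\n{goal_lines}"

prompt = build_system_prompt(AGENT_CONFIG)
```

In a multi-agent setup, a second agent could be given a reviewer persona the same way, which is one simple reading of "AI checking AI."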
Um, they're both learning and adaptive, right?
That that's something that doesn't change.
(04:15):
They both have reasoning, right?
They've built in some of the
reasoning responses.
But again, going back to that, um, discussion
around goal orientation
and proactivity, the agentic AIs have the ability
to plan sequences of actions to achieve a goal.
Going back to the analogy
that I used about, uh, you know, a journalist, its goal is
(04:38):
to go out and collect information about complex topics,
and then to synthesize those into a report, versus a traditional AI is just
going to give you a dump of information based on
what it was able to uncover.
I think the last area that I'll share, um,
that I think is a big difference, which I find
to be very interesting, um,
is interaction with the environment.
Right now, the traditional AIs
(04:59):
and, you know, large language models like GPT
or Claude, they really do process static inputs.
So it's a, you know, give and react system.
I give them a question, it reacts back.
I give it an idea, it reacts back.
The new agentic AIs can actually engage with external systems
and modify their approach and their behaviors dynamically.
And we're starting to see that in some
(05:21):
of the autonomous vehicles that are out there,
where it can learn from, um, experiences
that it has driving on a road,
and it'll modify that going forward.
So you're starting to see that, you know, kind
of being implemented in the real world with systems that,
um, you know, can take multiple inputs.
So visual inputs as well as, you know, written inputs.
Large language models tend to be, for the most part, just
(05:43):
written with some pictures, versus now you're starting
to see videos, you're starting to see understanding
of the weather for some of the new, uh, systems
that are being, you know, developed for, call it, you know,
flight and, and aerospace systems.
So, you know, those agents can actually
interact with the environment.
So that was a very long-winded answer for you there,
Warren, and, and sorry for that.
It's something I'm very passionate about, so I wanted
(06:04):
to make sure that we definitely
talk through some of the differences.
Yeah. Well, I appreciate you setting the stage on,
on essentially this, this definitional explanation,
uh, on the, these two.
And my question, uh, what I'd like you to do now is, uh,
take your crystal ball out and take the cover off of it.
Set it on the, on the table.
I know our listeners love, uh, data-driven answers and, and,
(06:27):
and data-driven perspectives.
And so I'd like to bring in the Internal Audit
Foundation's Vision 2035 research
and report that was, uh, put out last year.
If our listeners don't have it, go to the, uh,
Internal Audit Foundation's webpage
and download a free copy.
It has, uh, a lot of good information as we look
(06:47):
to 2035, right?
We look at the profession and,
and what are things that we're gonna need as we go forward.
So for you, Ethan, and,
and kind of the crystal ball point of view in our analysis,
in our research and our data, uh, we had a series
of questions around technologies and emerging technologies.
One in particular, relevant to this topic around AI is
(07:09):
that 48% of our respondents in the research
are involved in AI activities.
And it's varying degrees of, uh, activities,
but in some way, shape
or form, uh, involved in or actively looking at,
uh, artificial intelligence.
Now, 48% is probably low, which is, uh, a,
(07:30):
a startling revelation. But maybe,
and some would say, it's not that low, given
where we were at the time
the research was done in 2024, and as we move into 2025.
Okay, so here's the crystal ball question for you.
How do you see agentic AI transforming internal audit
functions in the next
five to 10 years as we look towards 2035?
(07:53):
Okay, well, there's a lot
of ways it's gonna transform.
Uh, Warren, you know, as you know,
we've been working with a lot of the large tech companies,
and so we're getting an early preview
of what's going to be coming down the pike,
and I'm happy to share those examples.
I would totally encourage you to pause me as we go
through these examples, because, you know, some
of these use cases are just mind blowing.
(08:14):
I would imagine that that 48% that's using AI,
it's probably using some of the traditional LLMs, you know,
ChatGPT among them. Such great tools, right?
They help you think through critical examples.
They help you learn, right?
It's like having a buddy there next to you.
That's great, right?
But what we're seeing coming down the pike is
just astounding.
So I'll start with my favorite one.
(08:34):
You know, at the beginning of the year,
we do a risk assessment, right?
Enterprise risk assessment, internal audit risk assessment,
you know, financial controls, risk assessment,
whatever risk assessment you're doing, you do
that at the beginning of the year for internal audit.
And it's usually a, you know, periodic
type activity, maybe a survey,
maybe a conversation, an interview.
You know, maybe it's, we're looking at policies and data
(08:54):
and we're consolidating that into some risk assessment
to drive whatever it needs to drive.
Sometimes, you know, for internal audit, often it's:
what audits are we going to do throughout the year?
And, you know, how are we aligning the risk with the audit
and making sure that we're addressing that appropriately
and getting feedback from management.
Agentic AI may flip this model on its head
for the simple reason that rather than having
to do a periodic audit, imagine having an AI
(09:17):
that could sit there and monitor communications,
have conversations with the executive team,
have conversations with the people in the field,
you know, consolidate that information.
We all interact with these AIs on a consistent basis,
and it knows what the concerns are.
It can read the emails that are coming in and going out.
It can look at the transaction information that's going through.
It's, it's really moving to
(09:38):
that continuous real-time monitoring
that we've been talking about for many years.
But because it's so easy for these AIs now
to take a look at this unstructured information
as it's flowing around organizations, it can now begin
to get a sense of what are some of these real risks, right?
It can autonomously adjust risk scores.
We can give it a risk model
and say, Hey, here's what our model is.
(09:59):
And it can say, you know what? Based on
what I'm seeing coming out of our
operations, we need to adjust that.
Additionally, it can, you know, monitor for global threats,
um, emerging anomalies, macroeconomic conditions.
I mean, it can incorporate a spectrum of data
that we haven't been able to incorporate
for many years into that risk assessment.
And really help us hone in on what is a big issue, you know?
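The continuous risk-scoring idea described here could be sketched roughly like this in Python. All the risk areas, signal names, weights, and thresholds below are invented for illustration; they don't come from any tool mentioned in the episode, and a real system would keep a human in the loop before any score change sticks.

```python
# Hypothetical sketch of agentic risk-score adjustment (not a real product).
# Baseline scores come from the annual assessment; incoming "signals"
# (email trends, transactions, threat intel) nudge scores up or down.

BASELINE = {"procurement": 3.0, "payroll": 2.0, "cybersecurity": 4.0}

# Assumed signal format: (risk_area, direction, weight)
SIGNALS = [
    ("procurement", +1, 0.5),   # spike in vendor complaints in email traffic
    ("cybersecurity", +1, 0.8), # new global threat intelligence
    ("payroll", -1, 0.3),       # anomaly volume trending down
]

def adjust_scores(baseline, signals, floor=1.0, cap=5.0):
    """Apply each signal to its risk area, keeping scores on a 1-5 scale."""
    scores = dict(baseline)
    for area, direction, weight in signals:
        scores[area] = min(cap, max(floor, scores[area] + direction * weight))
    return scores

updated = adjust_scores(BASELINE, SIGNALS)

# Areas whose score moved by 0.5 or more get escalated to a human reviewer.
escalations = [a for a in updated if abs(updated[a] - BASELINE[a]) >= 0.5]
```

The escalation list is the human-in-the-loop hook: the agent proposes, a person disposes.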
(10:19):
And, and as we've started to see this happen,
it's actually uncovered things that, you know,
the humans might have missed.
Um, on the flip side of that,
it sometimes gives us information
that it thinks is a big issue
because people are working very hard on it,
but it's really not that big an issue.
So, a good example is, you know, some
of the early AIs in the agentic world that we've seen,
they tend to take an outsized focus on, um, some
(10:40):
of the compliance related activities
that are critical, right?
We need to go do SOX work, we need to make sure
that those financial controls are in place.
However, from an executive's perspective, as long
as there's no material weaknesses,
there's no significant deficiencies,
those are just normal course
of business things that need to be addressed.
And the real risks need to be more operationally focused.
And I know that that's something that, you know,
(11:01):
the IIA is kind of moving toward, is how do we help
internal audit be more of the consultative partner
that helps address some of the organizational risks,
and, you know, try to make sure
that we're spending the right amount
of time on our compliance.
So, let me pause there; that was one example.
I, I think I've got like four
or five more if you wanna run through them.
Yeah, why don't I start with the one that's
(11:22):
in use right now?
Um, now, it's automated control testing
and exception handling.
Um, we have a tool called Comply AI.
It's gonna be in its second revision.
Um, you know, and instead of auditing samples
of vendor payments, you know, we're able to ingest that data
and we're able to
optimize our control testing procedures right away.
The future of that is going to be testing
(11:43):
of a hundred percent of transactions. We're targeting the end
of the summer for automatically ingesting information
and applying those, uh, control tests on a continuous basis.
Uh, the firm, you know, Grant Thornton, along with some
of our, our tech partners
and banking partners, we released a, a report last summer
that was, you know, how to take advantage
of a hundred percent control testing
and you know, how to think about exceptions and issues
(12:04):
and detecting anomalies.
So I'm certainly happy to, you know,
uh, go further into that.
But I think that's, that's one of the key use cases
that we're seeing right now, and folks love it
because it minimizes that compliance burden
and enables folks to focus more on business operations
and operational audits.
I think the other, um, area
that we're forecasting in our world, um,
(12:25):
is looking at intelligent anomaly detection, you know, and
that predictive risk forecasting.
we've been talking about this for so many years,
and there's some very niche tools
that can do it in the IT space.
Um, you know, think about CrowdStrike, for example.
Um, there's some smaller tools
that can do it in like the accounts payable space.
If you think about some of the AP tools that are out there
for managing that, there's nothing that kind
(12:46):
of thinks about it globally or holistically.
And so over time, we're gonna see some of
that predictive risk forecasting happen with some
of these agentic AI tools.
A great example, um, you know,
you might see minor expense reimbursements.
You know, I remember we discovered this with a client, uh,
a governmental client a few years ago where there were, um,
(13:07):
$4 charges, or, you know, charges right around $4.
And I remember we did a forensic analysis of that,
and we uncovered that those charges were all
named after the Grateful Dead.
And we're like, well, that's really odd.
It turned out it was a massive-scale fraud
that was being done at a micro level, right?
AI could do that in real time, rather than having
to do a whole forensic assessment that costs thousands
(13:28):
of dollars and takes hours and hours of time to do.
AI can detect that as the transactions are moving through, right?
Hey, why is one
of the Grateful Dead doing $4 transactions
every Tuesday, right?
It's something that they would pick
up on that a human might not.
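The Grateful Dead example boils down to a pattern a machine can scan for continuously: many small charges clustered around the same amount, recurring to related payees. A rough sketch, with invented data and thresholds standing in for whatever a real monitoring tool would use:

```python
# Sketch of the kind of pattern an AI could flag in real time: many small
# charges clustered near the same dollar amount for the same payee.
# The data, target amount, and thresholds are invented for illustration.
from collections import defaultdict
from statistics import mean

charges = [
    ("Jerry Garcia", 3.95), ("Jerry Garcia", 4.10), ("Jerry Garcia", 4.05),
    ("Bob Weir", 3.99), ("Bob Weir", 4.02), ("Bob Weir", 3.97),
    ("Office Depot", 112.50),
]

def flag_micro_patterns(charges, target=4.00, tolerance=0.25, min_count=3):
    """Flag payees with >= min_count charges clustered near the target amount."""
    near = defaultdict(list)
    for payee, amount in charges:
        if abs(amount - target) <= tolerance:
            near[payee].append(amount)
    # Report each flagged payee with the average clustered amount.
    return {p: round(mean(a), 2) for p, a in near.items() if len(a) >= min_count}

suspects = flag_micro_patterns(charges)
```

A human then makes the judgment call (here, noticing the payee names are all Grateful Dead members); the code only surfaces the cluster.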
Maybe I would say the, the other one
that I think would be very, very interesting, you know,
we were talking earlier about some of the, you know,
concerns for, you know, humans' jobs being replaced.
(13:49):
Um, I don't think that's gonna happen.
I think it's just gonna elevate the role that we do.
But I, I can see it at the point where we have, you know,
AI driven audit execution and reporting, right?
So where the agentic AIs can actually autonomously perform
certain audit steps for us, like data extraction, um,
issue follow-up, right?
There's no longer a need to send an email to our,
(14:11):
our folks in inventory, for example,
because the agentic AI realizes that there's something missing
and it just sends an email and says, Hey, we noticed
that you were missing this large item in inventory.
What happened with that? Right?
So, you know, it's kind of moving away from
that one-time periodic audit to just kind
of keeping an eye on things throughout the year,
and restructuring that audit as something
that's happening throughout the year
(14:33):
that the agentic AI is keeping tabs on and giving us a report
and letting us know if it's seeing something
that's consistent or, or an issue.
So I think that's kind of, you know, where I,
where I would say is kind of the next frontier.
That's really, what I would say, is five
to 10 years out, where you have these AIs actually performing
these audits on a continuous basis, you know,
with input from the humans and coming back and,
(14:53):
and reporting on things, moving away from that periodic,
that periodic audit type idea.
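The autonomous issue follow-up Ethan describes — the agent noticing a discrepancy and drafting the inventory email itself — might look something like the sketch below. The record format, addresses, and wording are all hypothetical, and a real deployment would route the drafts through a human approval step rather than sending them automatically.

```python
# Sketch of autonomous issue follow-up: when the agent notices an inventory
# discrepancy, it drafts the follow-up email itself. All names and the
# record format are hypothetical; a human approves before anything is sent.

expected = {"SKU-100": 50, "SKU-200": 10, "SKU-300": 7}
counted  = {"SKU-100": 50, "SKU-200": 2,  "SKU-300": 7}

def draft_follow_ups(expected, counted, owner="inventory-team@example.com"):
    """Draft one follow-up message per SKU where the count is short."""
    drafts = []
    for sku, qty in expected.items():
        shortfall = qty - counted.get(sku, 0)
        if shortfall > 0:
            drafts.append({
                "to": owner,
                "subject": f"Inventory discrepancy: {sku}",
                "body": (f"We noticed {sku} is short by {shortfall} units "
                         f"(expected {qty}, counted {counted.get(sku, 0)}). "
                         "What happened with that?"),
            })
    return drafts

pending_review = draft_follow_ups(expected, counted)  # human approves before send
```

Run continuously, this is one small piece of the "audit that happens throughout the year" rather than at a point in time.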
What I'd like to do now is, is move on in that, uh,
looking at the talent component.
And really in our, in our vision 2035 analysis, we did a lot
of research around what the profession's gonna need in terms
of technical capabilities,
and what will the internal auditor's background
(15:15):
of the future hold probably different than
what the internal auditor of the past has had in his
or her college education and toolkit of experience.
So with that stage set,
and thinking about the, the, the talent preparation of, uh,
future internal auditors, what do you think those that, uh,
are working in the world of agentic AI are going to need
(15:36):
to have in terms of skills and capabilities?
So that's a, that's a great question, right?
It's kind of funny. I see this changing dramatically
with our staff, you know,
and with students that are coming out of college.
I think, you know, as you think about
how AI is going to become core to our operations
as an organization, as it will for everybody, some
of the key competencies that we're going to be needing
to think about, I call it AI literacy.
(15:57):
We used to call it data analytics, Warren, which I know
that you're very familiar with,
but in this case, it's, you know, rather than having to
actually write the structured SQL
and build the visualizations ourselves, um, it's going
to be how do you interact with an AI to actually
generate the information that you need to, you know,
understand that, right?
So if I were to say that there's three
(16:19):
or four core competencies that you really need
to think about, one is AI fundamentals, which is
what model you're working with.
Are you working with a machine learning model?
Are you working with an agentic AI?
Are you working with something that generates analytics?
So it's really just understanding the model itself.
Do you have to be a coder? No, absolutely not.
Um, you, you just have to understand how they operate.
So, you know, in what capacity do you actually
(16:42):
use the right AI?
It's the right tool for the right
job situation, right?
You know, you don't bring a hammer
when you need a screwdriver.
At the same time, you don't use a machine learning tool when
what you really need is something
that proactively generates your risk
assessments, which would be an agentic AI.
Where do you think we'll go for talent?
Where do you think the profession will have to recruit?
You know, I, I think that it's going
to come from within, to be honest.
(17:03):
I think it's gonna be a learning exercise.
The metaphor
or the analogy
that I always use is: think about the conversion from basic
ledgers, handwritten ledgers, to spreadsheets.
There were some folks that just never got on board
with using a personal laptop
and a spreadsheet.
There'll probably be some folks that just never get on board
with how to work with the AI.
However, the vast majority of folks
(17:25):
that were bookkeepers when spreadsheets came online,
learned. They're like, Hey, I need
to learn how to use this spreadsheet.
It's gonna be the future of our industry
from an accounting perspective.
And they went and they learned it.
So I think it's going to be a training exercise.
It's gonna be a generational shift in many ways.
So colleges for sure are gonna do the training,
but I think other organizations like the IIA are going
(17:46):
to be very, very fundamental
to training folks how to use it.
Yeah, that's, that's a, a great, uh, layout of
how you're thinking about it.
And, uh, I know many of our, uh, fellow internal auditors
and chief audit executives are,
are having these dialogues
right now as we go through this.
So a very relevant response, uh,
(18:07):
as we continue down the path of, uh, being very, very
compliance oriented and strategic advisors
and finding the balance there, right?
In our vision 2035 analysis, uh,
and report, we did a lot of research around
where we're at today on assurance or compliance work and,
and where advisory sits in today's plan,
(18:29):
if you will, and where do we need to go?
What would be the ideal future?
And so today, uh, we're at 76% doing assurance
or compliance work, and 24% doing advisory work.
The ideal future, based on our research,
the respondents felt that assurance would come down
to 59%.
So kind of the compliance work would fall
(18:49):
and advisory needs to increase to 41% or more.
So that sets the stage for this question
as we look at the importance and,
and really the, the request of stakeholders,
management teams, and even our own internal audit leaders
for internal auditors
to become strategic advisors in the future.
(19:10):
How can agentic AI help auditors transition from
compliance-focused roles to more proactive advisory-type positions?
That's a good one. So, a few ways, right?
So obviously we talked about some of the automation
of risk assessments, right?
You know, one of the other areas that, you know, there's,
there's a, there's a thousand use cases, right?
(19:31):
And we're seeing this happen more and more consistently.
So a lot of our major clients are looking at how
to automate controls testing,
because it's a compliance exercise. A lot
of it is rote, in terms of it's, you know,
very consistent year to year.
It does require some logic and some reasoning,
but some of our tools like Comply AI can automate a lot
of that, uh, right now.
And there's other tools that are out there like n8n
(19:51):
that can automate a lot of that work
and start to bring that workload down
and help managers, especially IA managers,
reorient their staff towards more business-focused functions.
So that's really, you know, the, the big help is taking
that workload down in the areas that they don't want.
And there's been a very
unusual side effect of that.
As we bring more AI to the table, um, one of the things
(20:14):
that we've discovered is that it actually increases the
satisfaction of employees with their jobs.
They're happier to be doing the work
that they're actually doing, because they can use the AI
tools to automate a large portion
of the things that they don't like.
I mean, I don't know about you, Warren,
but how many of us actually really enjoyed formatting our
documents for grammar and consistency?
It was one of those things that, you know,
(20:34):
as a former engineer myself, I really hated that stuff.
Mm-hmm. Yeah. Um, so you know, now that AI can do that
for me, and it does it in seconds.
And so, you know, that half hour of my day is now, you know,
freed up for those things that I'm really interested in,
which is going out
and taking a look at what real risks
are to the organization.
So freeing up, freeing up the time of the assurance
(20:54):
and compliance work by some of these tools is great.
What's an example of where you think
that agentic AI could plug into
the strategic side of what we do?
Ooh, that's a great one.
How about I give you a good example of one that I actually,
you know, know about, um, Warren.
So in this case, one of the big challenges for one
of our retail clients was, um, demand fluctuations, right?
(21:16):
They, they didn't understand, you know,
where is the demand coming from?
They're having a hard time forecasting,
and they weren't sure exactly like how
to correct for that issue, right?
They need to be able to forecast budgets,
but demand was kind of all over the place.
And so, you know, internal audit stepped in
and said, why don't we do a, an audit
of the demand forecasting,
(21:37):
and we'll apply AI and see what happens.
And, you know, what they discovered was
that the AI tools were really good at not just helping
with the compliance monitoring aspect,
but with also, you know, helping to predict some of
that customer demand fluctuation.
Where was that inventory going to be needed?
When was it going to be needed?
And it was an interesting side effect,
and it was really one of those things where they're like,
(21:57):
wow, internal audit just solved our biggest problem,
which was, how do we actually help with demand forecasting?
And it was a side effect of them coming in
and doing an audit of something
that had been a challenge for them many times.
And, you know, that's one of those areas where, you know,
by being skilled in this space, that retail client was able
to say, internal audit is truly a business partner.
'cause they brought AI to a consistent problem
that we've had for a long time.
(22:19):
Yeah. That, that's a great, a great example.
And I hope our listeners, uh, can, can reflect on
that a little bit and, and apply it
and broaden it to, to other areas
that might be relevant in their shops.
I wanna talk a bit about kind of the ethical
and risk consideration aspect of this.
So a little different perspective,
a little different conversation, uh,
but auditors nonetheless, right?
(22:40):
We're using these tools and we will use these tools,
but how can auditors ensure that accountability
and transparency and reliability are in place?
Now, we're not lawyers.
So I know we can qualify that
and say, now we're not gonna get a legal response
to this question, but let's get a practical answer
for internal auditors.
So, Ethan, what do you think? That's a great one.
(23:03):
Um, people always jump, there's,
there's three key risks that I'll talk about here.
One of them is the one that everybody always thinks about,
which is AI bias and data integrity, right?
AI learns from historical data. You can contaminate it.
There can be biases in the data;
there can be biases in the model.
You know, if it's trained on flawed data, it, you know,
it could misinterpret risks
or, you know, misprioritize controls.
(23:24):
You know, the way that you mitigate that bias,
really, is, you know, just regularly going back
and checking the underlying data that you have for accuracy,
fairness, you know, making sure you're removing the bias.
And really, the biggest thing, and,
and Microsoft does a great job talking about this in their,
you know, responsible AI framework.
It's, they call it the human in the loop,
but it's really human oversight.
(23:45):
You cannot take the human out of the equation.
And even with the gen AI,
where there's reasoning capability, you still have
to have a human look at the result
and say whether it makes sense.
You know, there always has to be that human in the loop.
That's, that's the risk that everybody always jumps to,
you know, from, from an internal audit perspective, the one
that I always see is the biggest risk from my perspective
(24:05):
is the lack of explainability.
We are really good at explaining why
a problem is a problem.
But when an AI finds a problem,
it has a really hard time explaining why it's a problem.
And so there's some new tools that are coming out, um,
you know, that help explain reasoning, right?
You can use reasoning models that show the steps
that the AI went through to accomplish its goal.
(24:27):
Um, you can look at detailed audit logs,
that's really the risk mitigation,
and you can flag things, you know, to, to make sure.
I, I don't think we've really figured out how to mitigate
that risk really, really well yet.
That's one that's still evolving,
and that's one that we're going to have to work
with the technology companies to really help us, you know,
remove that black box type issue.
(24:47):
I think the third big risk as an internal auditor
that I worry about is weakening our auditor's judgment.
Uh, you know, really that over-reliance on AI just,
you know, makes me nervous every
day as a, as a principal.
And so, you know, we don't want people blindly trusting AI;
we don't want them to forget that there's a nuance
(25:08):
to the risks that requires human judgment, right?
And, you know,
the AIs don't fully understand the business context
or the ethics associated with a given situation.
So, you know, again, that goes back to
that human in the loop, um, solution.
You need to make sure that there is a human
that is reviewing the results of the AI's work, right?
And, you know, a great example, we do a lot
(25:28):
of vendor risk management, right?
If a, you know, vendor gets flagged as low risk,
for example, um, you know,
a seasoned auditor might realize, like, Hey, you know, just
because you flagged that vendor as low risk,
I know what's going on in the market
and I've had problems with this, you know,
vendor in the past before, and the AI is using bad data.
We need to change that flag
and make sure that this is more of a medium
(25:49):
or high risk type vendor.
Because you just have that experience
that's been in place for many, many years.
The AI might not be able to explain it,
but the human knows intuitively based on past experience.
This is not a vendor that we need to just trust blindly.
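The vendor-risk scenario is a concrete human-in-the-loop pattern: the AI's flag is only a recommendation, and a seasoned auditor can overrule it, with the rationale logged for transparency. A minimal sketch, with invented vendor names and a made-up data structure:

```python
# Sketch of human-in-the-loop override of an AI risk flag. The AI's rating
# is a recommendation; a human can overrule it, and every override is logged
# with a rationale. Structures here are illustrative assumptions.

ai_flags = {"Acme Supply": "low", "Globex": "medium"}

override_log = []

def final_rating(vendor, ai_flags, human_override=None, rationale=""):
    """Return the AI rating unless a human supplies an override (which is logged)."""
    rating = ai_flags.get(vendor, "unrated")
    if human_override and human_override != rating:
        override_log.append({"vendor": vendor, "ai": rating,
                             "human": human_override, "rationale": rationale})
        return human_override
    return rating

# The auditor knows this vendor's history and bumps the AI's "low" flag up.
r = final_rating("Acme Supply", ai_flags, human_override="high",
                 rationale="Past delivery problems; AI appears trained on stale data")
```

The override log doubles as evidence for the explainability concern raised earlier: it records not just what was decided, but why a human disagreed with the model.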
Let's stay on that track of, uh, the human. Okay.
So, uh, everybody's thinking about this.
I know our listeners are, are thinking about this and,
(26:10):
and really the, the whole world is thinking about this with,
uh, this continuation around, uh,
artificial intelligence
and, like, the growth of this.
What do you think about the human internal
auditor role, right?
That role. Uh, is that role gonna go away?
Is that role gonna be reduced?
Will there be less internal auditors,
(26:31):
less human internal auditors?
What do you think
this is gonna look like in 3, 5, 10 years?
That's always the tough question, right?
I think everybody's always said, you know,
will new technology replace us?
Will there be something different?
You know, I think the analogy I use on this
one is the internet, right?
The internet came along.
There were people that were out there, you know, manning,
(26:52):
you know, phone lines and doing data transfers.
I mean, you know, for anybody that is an auditor,
we remember that, you know, people used
to take these big tape libraries as backups,
and they used to like run them out
to like a storage site, right?
Big physical issue. The advent of the internet,
and then subsequently cloud systems.
There's no longer a human being that has
to run those big tape backups from one system
(27:12):
to another any longer, right?
It automatically gets backed up.
That control has been changed. Similarly with AI, you know,
and, and then again, to your point, Warren,
this is Ethan taking out his crystal ball right?
In, in human history. You know, have people been replaced?
Sure. But, you know, has something better come along
to help them adapt to that change?
(27:32):
The answer is, yeah, new jobs came along.
You know, there were no network engineers in 1985, right?
But you know, today there's thousands of them,
or hundreds of thousands of them
that help maintain the internet, um. To the same degree,
is there an AI-enabled internal auditor today?
Only to a limited degree,
but is that role going to evolve into something
that we've never even thought of?
(27:53):
Probably. And I think it's gonna be the, you know,
organizations like the IIA
that are gonna help folks kind of move into that new role.
They're gonna have to learn how
to do something different, right?
And adapt into that. You know, again, going back to that,
there's, we're gonna have to train folks in AI fundamentals
and have AI driven work.
But I will say this, for the early analysis
that we have done, and for folks taking, you know,
(28:16):
adopting these tools, what we have found is
that it has actually elevated their role
to be more critical thinking
and more problem solving, right?
It's gonna be a lot less of the documenting and defining
and defending of our results
and more of the how do we just move into the problem
solving, which is really where we wanna be anyway.
(28:37):
It, it's, it's where folks really want
to, uh, spend their time.
I think one thing it's also going to do, Warren,
and this is gonna be the most interesting piece
as a discipline internal audit, is going to have
to become multidisciplinary, right?
We're no longer gonna be just experts in audit,
maybe financial and IT. We're gonna have
to know about operations.
We're gonna have to know about, you know, FDA compliance.
(28:59):
We're gonna have to know about HIPAA.
So it's going to be very interesting how we're going to have
to evolve into a much broader knowledge base going forward.
I probably have a little less optimism than you
have in that, in that response.
That's probably the way to say it.
And, and I'm an optimistic guy.
You've worked with me for a long time.
Uh, but the speed at, at
with which this technology has advanced,
when you look back over, you know, the recent past,
(29:22):
so 50 years, 70 years of history, even a hundred years
of technology, the speed
of this technology advancement has been extremely fast.
And the other technology advancements, I felt,
just took a lot longer to perfect.
And so there was a much more gradual
transition from human work to automation.
(29:45):
And there was more time for, for humans to,
to find meaningful roles and enhanced roles.
My concern with this is that this is just so fast.
This agent learning is so fast.
This autonomous development is so fast that, uh, the volume
of work that, uh, that these tools may be able to take on.
I worry that the gap that it's gonna leave on the human side
(30:05):
of the roles is gonna be faster coming
and much larger to fill.
So therefore, where does that leave at least some
of our internal auditors? And,
and probably the answer is somewhere in the middle of me
and you, but I certainly do think that, uh,
everything you said around, uh, the skills and and,
and technical capabilities of, of people looking
to be in the profession needs to be wide and varied
(30:28):
and has to have a very sharp eye to the advancement
of all these technologies that you talked about.
That's a, that's a great point. Yeah.
Maybe, maybe I'm overly optimistic,
but one thing to think about though, Warren,
and this is one of those things that
I love about internal audit.
The companies that have adopted the artificial intelligence
tools rapidly, they're starting
to see something interesting happen with their staff.
(30:49):
Their staff are, are getting placed into executive
or leadership roles within their organization.
Because what the internal auditors end up learning isn't
necessarily the core of how to do internal audits.
It's the core of how to run the business
and how to run it well.
Because they have to become those multidisciplinary experts
where they're like, well, how do I do demand forecasting?
(31:10):
Going back to the retail example, the internal auditor
that ended up, you know, figuring that out
ended up going into the FP&A
role to help with that demand forecasting
and got placed into a very high level role
that normally would've taken four
or five years and an MBA from a prestigious school.
And suddenly they were thrust into that role right away
because they figured out how to do it very quickly.
So I think, you know, there's, there's some interesting, um,
(31:32):
learnings that are happening, which is now internal audit is
no longer just a profession in and of itself.
It's now a launching pad for, you know, leadership,
which is something that's really unique.
And I do agree with that. I do agree with that,
that comment and that sentiment.
And I am seeing that as well, uh, in the market.
What is the advice that you have for internal auditors
(31:52):
who are skeptical about ai?
Well, you know, I think that that's a very simple one,
and that's just go play with it.
There are lots and lots of avenues to play
with AI tools for free.
Every single vendor that's out there
that's pushing an AI tool will give you pretty much a free
instance of it to work with.
(32:13):
There's a few limited ones that won't,
but, you know, the cheapest
and easiest way to do it if, if you have not gone out
and played with GPT or Claude or Copilot
or any of the, you know, relatively inexpensive tools
or free tools out there, go out and play with it.
See what it can do for you, right?
At a very minimum, try something different.
Ask it to help you think about risks in the market relative
(32:34):
to the organization that you're thinking about.
You don't have to put any key data in there.
I'm not saying go put organizational data,
don't put anything in the public models, right?
We, we wanna be sensitive to data,
but start to think about how you can use these tools
internally within your organization.
I, I would say that, you know, you, you noted
that 48% have already started working with it.
I would be willing to bet
that most organizations have opened up the ability to play
(32:56):
with an AI tool somewhere within their organization
that you can take advantage of.
And, you know, within that model, you should try it out
and give it a shot and see what happens.
Um, you know, there's some great things out there
that will make your life easier.
One of my favorite ones is a tool
that we use at Grant Thornton called flow.ai.
It will automatically build a flow chart in Visio based on a
(33:18):
transcript from a walkthrough, right?
A very common thing that we as auditors do.
We do a walkthrough, we build a flow chart,
we wanna know what the process looks like.
Um, there's a lot of tools out there that will do
that automatically now, and it will
make your life so much better.
I don't know about you, but I don't like drawing little boxes
and arrows any longer at my old age, Warren, um, you know,
I I I'm very happy to let a computer do that for me,
(33:38):
and I can go make corrections on it, so,
yeah, no, absolutely.
And, and you, and you know, your, your advice about, uh,
go out and play with ai, uh,
and get, get exposed to it is spot on.
I've been in the profession 35 years, so,
you know, people can do math.
The listeners can figure out that when I went to school,
you know, I, I grew up in my professional life very,
(34:00):
very differently than the way
college graduates today are growing up.
But I went out and, and played with AI and,
and have done a number of different things, uh,
in my personal life, whether it's, uh, planning a trip, uh,
that used to take me, you know, a week on and off, back
and forth to do a lot of research
and loaded it into, uh, an AI engine.
(34:20):
And, you know, within three minutes,
spit out a seven day itinerary.
Very attuned to my, my style, my approach, uh,
must have searched where I like to Google
and where I like to search on hotels
and restaurants and other things.
And certainly, it gives you that aha moment, uh,
when you see the speed at which this very accurate
(34:41):
information comes back.
And it just saved an immense amount of time.
Uh, I've also used it to help me with, uh, slide preparation
and getting ready for, for speeches and,
and preparing slides that normally would've taken
a marketing department probably a week to create.
And, and these slides are created in a matter of moments.
So that's just a couple of examples where I've, uh, played
around with it to quote you, Ethan,
(35:02):
and has given me certainly comfort
and kind of that third dimensional feel on what this can do,
how this can help, and when targeted and harnessed
and deployed in the right way, uh,
agentic AI can certainly help
future success for all of us.
So, you know, in closing, uh,
do you have any final remarks you'd like to leave
to listeners with today, Ethan?
(35:23):
Well, I think, I think we kind of hit on that final remark.
The one thing I will say is that, you know,
it's an opportunity to learn.
One thing I would say about AI that I find the most fun is
that it is like having your own personal professor right
there with you. It really is.
And so, you know, if anything, if, if you need
to go play with it, set a goal
for what you wanna learn about.
(35:45):
I never knew that I had an interest in molecular biology
or physics, you know, especially
as a career auditor myself, Warren.
But it turns out that, you know,
because I have clients that work in those spaces,
it's a great learning tool,
and you can have some fun with it.
You know, go learn something fun.
It'll teach you and it'll be like
having a professor right there.
Great conversation. Thanks Ethan. Appreciate it. Thank you.
(36:09):
Are you concerned about security in the age of AI?
Join The IIA's 2025 Analytics, Automation,
and AI Virtual Conference on April 24th.
You can hear from industry experts
how cutting edge technology is transforming internal audit
by securing your spot and registering today at theiia.org.
If you like this podcast, please subscribe and rate us.
(36:31):
You can subscribe wherever you get your podcasts.
You can also catch other episodes on YouTube or at theiia.org.
That's T-H-E-I-I-A dot org.