Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Hello and welcome to another episode of the Data Revolution podcast. Today, a repeat guest,
(00:09):
old friend Gladwin Mendes, welcome. Thank you for having me again, Kate. Pleasure as
always. Now, I'll just remind people: Gladwin is a director and a data executive with a lot
of experience, and he's now working as a fractional chief
data officer, which is one of the ideas that I've been fascinated by and wanted to have
(00:32):
a bit of a chat about. So, over to you, Gladwin. Tell me all about your new business.
So, look, it's a business that a close friend of mine and I have been talking about for
a while, and it's an opportunity to really make an impact on small and medium-sized businesses.
(00:53):
And if you actually look at the stats in Australian census information, I think it's something
along the lines of 87 to 92% of businesses in Australia are small to medium-sized organisations.
What do you mean by small to medium-sized? You'll have to look at the literal definition.
(01:16):
I think it's under 10 million or something. I think it's 10 million turnover, anything
under 10 million. So, that's the vast majority of businesses in Australia. Now, you've got
this other interesting push or pull, as I call it, around AI, where people, directors,
(01:39):
executives are being told, if you're not using AI or being a data-driven business, you're
going to be out of business next week, right? And then you've also got other directors and
executives going, we're using AI, and the people who are not using it are freaking out: we're
not going to be around. Plus we have the macroeconomic challenges and headwinds
(02:00):
we're facing into. So, with that in mind, the concept that's been quite popular overseas,
especially in the US and the UK and Europe, was this concept of fractional chief data and
analytics officers, where you might be small to medium and you can't afford a full-time
(02:22):
data executive like my friend Greg Tang and myself. But you want to still keep pushing
along and get your foundations. You want to be data-driven. You want to be able to leverage
these AI capabilities, but just need the expertise. You need something to move you along. That's
(02:43):
where we come in. So, we would go into an organization, maybe one or two days a week,
and the key focus now, it's different to consulting. We say consulting is a dirty word. The whole
idea of getting us in is to uplift the capability within that organization, where it might be
(03:06):
identifying a person who is a little bit more junior, but could be groomed and could be
trained and upskilled to be their future chief data and analytics officer, or provide those
skills and uplifting those capabilities. And yeah, that's it in a nutshell. We would come
(03:28):
in and teach that organization how to fish rather than sell them a fish. And if anything,
a good exit strategy for us is that we are always looking to be moving on, having
that organization uplift its capability, getting them on that journey and
getting the momentum going. So, yeah, that's the big outcome that we're looking
for: increasing and uplifting capability, one or two days a week.
It's something that kind of started off in startups over in Silicon Valley with chief
financial officers, where small startups didn't need a full-time head of finance, but they
(04:13):
needed some strategic guidance. They needed some capability uplift. So, I remember them
from like a decade ago over in the Valley. So, that's a really interesting thing. And
I think a lot of organizations now with the pressure to become data-driven are trying
to work out what to do and how to do it, which is a real challenge.
(04:35):
Exactly, Kate. Look, and you've probably experienced the exact same challenges I've seen where
no kidding, I've had directors come to me in previous roles and organizations and go, we'll
take three AIs. And it's like, we kind of go, that's not how it works. And I think there's
that significant, almost three-pronged approach of data literacy, technology and digital literacy,
(05:02):
and now AI literacy, that is critical and needs to be brought to organizations, but they generally
can't afford to get that by hiring someone full-time. So, it's exactly
like you said, coming in, helping an organization grow, and then moving on, or continuing on
(05:22):
with that organization over an extended period of time. But the idea is to exit at an appropriate
time. And that, to your point, might even require and involve us in hiring
a person for that role so we can move on.
And it seems to me that a lot of organizations are not realizing that to get the three
(05:47):
AIs that they want to buy, they need to do their plumbing and get their data in order.
So they need to get their data governance right, they need to get their data platform
right, they need to get their data pipelines right. That usually takes time, and that's why
I think this idea of a fractional CDO is such an important one, because you need somebody
with gravitas who can have the conversations with the board and the C level and help them
(06:12):
to understand it, because a lot of times organizations just don't have the mental map of how to get
from where they are to where they want to be to be able to do things like AI.
You're spot on, Kate. You're exactly right. And I think people are getting caught up in
a lot of the push and the hype and that fear of missing out without actually realizing
(06:39):
that just like anything else, and I was actually quoted in a thought leadership piece where
I went, look, the old adage holds true: garbage in, garbage out. That held true for your data
governance, the things that make people's eyes glaze over, but it's so much more critical
(07:00):
for AI. You know, it was expensive when you didn't do it previously; it's even more
expensive now with AI. There are many examples out there where AI has not delivered on its
promises because of a number of different challenges, everything from a lack of alignment
(07:23):
with business objectives, you know, all the way up to understanding the board's directives,
strategy and initiatives, all the way down to, you know, the nitty gritty: the quality
wasn't there, the data quality wasn't there. It was really funny. I was having a chat yesterday
with a colleague in the US, Dr. David Bray, and we were talking about the real challenge
(07:45):
with generative AI is businesses don't really need generative AI. The question is, does
anyone really need generative AI? What we really need is something more deterministic,
something that can give us consistent answers but with the flexibility of generative
AI. And, you know, a lot of people are working on that, but we're trying to sort
(08:07):
of shoehorn generative AI and agentic AI into a deterministic mould by using RAG models over
the top.
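To make that "shoehorning" concrete, here is a minimal sketch of the retrieval-augmented pattern being described: retrieve vetted passages first, then instruct the model to answer only from them, with sampling randomness turned off. The corpus, the crude keyword retriever and the llm() callable are illustrative assumptions, not any specific product's API.

```python
# Toy sketch of the RAG pattern described above: ground the model in
# vetted documents, then constrain it to answer only from them.
# The corpus and the llm() call are illustrative stand-ins, not a real SDK.

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by crude word overlap with the question (toy retriever)."""
    words = set(question.lower().split())
    ranked = sorted(corpus, key=lambda p: len(words & set(p.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(question: str, passages: list[str]) -> str:
    """Force the model to stay inside the retrieved context."""
    context = "\n---\n".join(passages)
    return ("Answer ONLY from the context below. If the answer is not in the "
            f"context, reply exactly 'NOT FOUND'.\n\nContext:\n{context}\n\n"
            f"Question: {question}")

corpus = ["Isolation procedure: de-energise, lock out and test before touch.",
          "Expense claims are due by the fifth of each month."]
prompt = build_prompt("What is the isolation procedure?",
                      retrieve("What is the isolation procedure?", corpus))
# answer = llm(prompt, temperature=0)   # hypothetical client; temperature 0
#                                       # reduces, but does not eliminate,
#                                       # run-to-run variation -- which is the point.
```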
It's a typical silver bullet approach, isn't it, Kate. It's like, oh, this will solve all
our problems. It won't. It's got to have the specific use case and those guardrails and
(08:27):
process and framework in place to deliver and deliver ROI and value.
Yeah, it's really fascinating. I've been talking to a lot of board directors, you know, who
are really concerned about how to use AI in their business and what should they be looking
at and stuff. And literally, they have no idea. Like, they're trying to govern the businesses,
(08:50):
but they don't have an idea. And then I talk to the C level people, and they typically don't
understand the power that AI and data can unlock for their organisations. And there seems
to be a real disconnect. So it's really interesting, and I see this as where the Chief
Data Officer role can provide input to both of those groups and can really help them
(09:16):
unpick this, so they can start to see the strategic value and start building the strategic plans
for their organisations that can then roll down to the tactical operations.
100% agree, Kate. I think it is a critical role and responsibility for a Chief Data or
Analytics Officer or Chief Data and AI officer. The new roles are coming out now, just like
(09:43):
everything else. But it is a critical aspect. If you take a leaf out of
the book of security, in the CISO space, culture and education are key. Likewise for AI: education
and AI culture are key. It's interesting, because there are already stats coming out that I've
(10:08):
seen recently where they're saying that actually the majority of people using AI aren't
AI literate; they're just using it without the right knowledge. And, if anything,
(10:29):
they've actually found that people who are more
literate with AI use it less. So it's almost the polar opposite. You'd assume,
hey, look, if people are more literate with AI, they'd use it more. But what some of the stats
coming through show is that people are using it without an understanding of the potential risks,
the leakage, the hallucinations and so on, which can have huge ramifications. But on
(10:54):
the flip side, once you do get people up to speed, they use it less, but I would say
they use it more effectively and safely.
The thing that I always worry about, and this is why I was talking about the deterministic
approach being more the focus for business: as a business, you want to give people the
right answer and you want to give them a consistent answer. One of the big things at the university
(11:17):
was that people used to answer-shop. Students would go to different parts of the university
and ask the same question until they got the answer that they wanted, because we didn't
have a harmonized set of data that people could draw on. And that often happens. And
we've had that famous airline case where a chatbot hallucinated an answer and gave a
(11:39):
customer the wrong information. It really hallucinated.
That was a costly mistake, by my understanding, too.
Yeah. But, you know, so even with the use of RAG models to try and add some veracity
in, it's still a hard problem. And having an understanding of these risks at an organizational
(12:01):
level is so important because I just feel that there isn't a very strong conversation
about the risks of AI at the moment. I think, in terms of the Gartner hype
cycle, we're still up at the heights of the hysteria, and we haven't come down the other
side to work out how to use it safely for fun and profit.
(12:22):
I agree. I agree. Look, I think there's certainly an aspect of that. People are not understanding
the risks. Now, I was involved with some discussions once with the Chief Technology Officer at
an organization, let's just say it was in the lines space, so utilities, electricity,
(12:43):
so high risk. You know, health and safety would have been topical and top of mind for
the directors of this organization. Now, the Chief Technology Officer at this organization
was, hey, look, let's go for it. You know, we'll use generative AI. We will use RAG.
(13:05):
We'll surface up service manuals to our employees, so the service technicians on the road working
in the power lines can go ahead, bring up the information. I had to go, whoa, whoa, whoa,
whoa. I was advising and said, can we step back? You're laughing, but the thing is,
(13:25):
the Chief Technology Officer. That was because I was like, oh, no, don't
do that. You know, from a director standpoint, and me as a director with health and safety,
the directors are liable for health and safety incidences. So you can imagine, when I had
to pull back this Chief Technology Officer and go, okay, let me pose this to you, is
(13:51):
how good is your data quality, first of all? Like your documents, do you have old outdated
documents potentially? Oh, we're not sure. And I was like, okay. And I said, let's just
come up with an extreme example and say that the system is 90% accurate. You'd say, generally,
that's good. But it's like, okay, 90% accurate. You've got people out working on lines, and
(14:16):
that 10%, if they get old outdated information and someone gets electrocuted and killed,
whose neck is going to be on the line? The directors', and then your neck is going to be on the line.
And you should have seen the CTO's face drop and go, oh my gosh, I didn't really think about
that. Right? You know, we still have a way to go, like you said, getting through the hype cycle,
(14:42):
being able to minimize the risk of hallucinations, being able to deal with potential
issues with data quality. Those data quality issues have not gone away, and with hallucinations
now coming in, there's still a way to go. And, you know, I guess you can put some controls
in place, human in the loop and so on. But yeah, I think it has to be a use case
(15:08):
that's low risk, high return on investment and so on, and then worked through properly,
through a proper governance mechanism, before being rolled out. Because everyone right now
is just running with it, because again, the FOMO: the other directors are saying,
we've implemented AI, let's do it, you know, we're trying to keep up with the Joneses,
(15:31):
etc. All those things need some balance.
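The 90% thought experiment is worth doing as arithmetic, and the "human in the loop" control mentioned here is easy to sketch: count how many wrong answers 90% accuracy implies at volume, and route anything safety-critical or low-confidence to a human before it reaches a technician. All numbers, topic labels and helpers below are hypothetical.

```python
# Back-of-envelope arithmetic for the "90% accurate" example, plus a
# human-in-the-loop gate. All numbers and helpers are hypothetical.

QUERIES_PER_DAY = 200        # field technicians querying the manual-bot
ACCURACY = 0.90              # "generally, that's good" -- until you scale it
wrong_per_day = QUERIES_PER_DAY * (1 - ACCURACY)
print(f"~{wrong_per_day:.0f} potentially wrong answers per day")   # ~20

SAFETY_CRITICAL = {"isolation", "earthing", "live work", "switching"}

def release(answer: str, confidence: float, topic: str) -> str:
    """Only auto-release low-risk, high-confidence answers; hold the rest."""
    if topic in SAFETY_CRITICAL or confidence < 0.95:
        return f"[HELD FOR HUMAN REVIEW] {answer}"   # stand-in review queue
    return answer

print(release("De-energise and test before touch.", 0.99, "isolation"))
```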
Well, I've been talking a lot about the risks to actual AI. So what are
the cyber risks to AI? And, you know, there's the poisoning of the training data. There's
the poisoning of the models, and there's the insertion of bad commands into them. And
(15:57):
I did a talk late last year to a bunch of cyber professionals, and that wasn't even on
their radar as a threat. A whole lot of them came up
to me afterwards wanting to talk to me about it, because they had not even started to think
about it as a threat. And if the cyber professionals aren't even thinking
about that, that means that people in the business aren't even being told that this
(16:19):
is a potential risk.
Yeah, exactly. So I would say AI should be on every organization's
risk matrix. It needs to be assessed and the board needs to assess it and have plans on
how they're mitigating the risks. It's interesting you say that because I chaired a conference
(16:44):
a year back that was for a whole bunch of security people. And I made the comment that
data people don't understand security, and security people don't understand data. And
you should have seen their faces, all scrunched up; then they'd stop and think about it and
kind of nod in agreement that, yes, okay, that's the case. Look, the majority of people,
(17:07):
and I'm not saying all people, having been responsible for security in the past: security
and cyber on its own is complex enough. Data, and now AI, on its own is complex enough.
It's that bridge: how do you ensure that you are using agile, integrated, you know,
(17:32):
DevSecOps approaches to bring that together and ensure that people are being included?
What used to be, and you'll know this, Kate, what used to be shadow analytics is now certainly
shadow AI, where people are using information in tools the organization hasn't sanctioned.
So you essentially need to go, and we used to
(17:52):
have a saying, and I used to use the same with the board when reporting on security:
it's not a matter of if we're going to get hit with a security event, it is when, and
how we can recover from it is the key thing. Likewise, again, it's that education, that literacy,
(18:15):
and people understanding, hey, you do not put confidential information into these open tools.
You do not put board papers into ChatGPT. As a director, I've seen some examples being
used, and I was like, they really should have caveated that, if that's what they meant. But
(18:36):
again, you know, some of the most confidential information is potentially going into these
large language models.
Oh, the number of organizations who put notices out this week to their staff
saying don't use DeepSeek because of all the security problems with it is hilarious.
Like emails. Look, and this is just my personal opinion, I think
(19:01):
there's a lot of focus on DeepSeek, but there's been significant leakage in American-based
tools as well. People, I think, have very quickly forgotten about that example in Australia
of, I think it was a department of corrections, where, and this is freely available online,
(19:25):
an external consultant had used ChatGPT in the space of sexual harassment and had gotten
a paper and a scenario pulled together using ChatGPT. Now, when she went and presented
this to this government organization, someone actually stood up and said, hey, I'll stop
(19:47):
you there, what you've actually used is a real ongoing case. And so that just shows the
risk of leakage even with Western-based tools.
It's showing the data literacy problem that you mentioned
earlier. And it means that data literacy really needs to go up for people to do this.
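The "don't paste confidential material into open tools" rule can be partially automated. Below is a deliberately naive sketch of an outbound screen that flags confidentiality markers before text leaves the organisation; real data-loss-prevention tooling is far more sophisticated, and the marker patterns here are illustrative only.

```python
import re

# Naive outbound screen for the leakage problem discussed above.
# The marker patterns are illustrative; real DLP tools go much further.
CONFIDENTIAL_MARKERS = [
    r"\bboard paper\b",
    r"\bcommercial[- ]in[- ]confidence\b",
    r"\bactive (legal |court )?case\b",
    r"\btax file number\b",
]

def safe_to_send(text: str) -> bool:
    """Return True only if no confidentiality marker appears in the text."""
    return not any(re.search(p, text, re.IGNORECASE) for p in CONFIDENTIAL_MARKERS)

print(safe_to_send("Summarise this media release"))                          # True
print(safe_to_send("Summarise this commercial-in-confidence board paper"))   # False
```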
(20:12):
Exactly. Data literacy, AI literacy, digital literacy, all of
those go hand in hand. And don't forget security literacy. That's
where I strongly feel, and I'm actually chairing a security event next week, I strongly feel that
(20:34):
as executives, we need to work more closely together. The chief digital officer, the chief
data officer, chief information security officer, we have such huge responsibilities. And with
those huge responsibilities comes huge risk. And we need to work together collaboratively
(20:59):
to protect our organizations and communities. Yeah, well, you know, I've made sure to bring
that kind of coalition together at the university. And you know, the CISO and I were basically
joined at the hip. And one of the things that we did was a joint exercise, just a tabletop
(21:20):
exercise, to walk our senior executives and some of our board through the impacts of
a breach of our systems. You know, a simple thing: a system goes down on Friday, and just
play it through to you've lost it all. If that had happened when we were enrolling
all of our students, they would just go to the next university; if we fell over, they
(21:44):
would just go to the next place. So the risk to us was that we would lose
that revenue for that year, for all of those new students. And it was funny watching around
the room to see the dawning realization on everybody's faces that these boring systems
that nobody is interested in are actually really fundamental to the revenue of the business.
(22:07):
And nobody had really joined those dots before.
Yeah, look, it sounds really good that you had that collaborative approach,
but that's what it takes. I always used to say security is everyone's responsibility.
Likewise, privacy is everyone's responsibility. Data is everyone's responsibility. And now
(22:31):
it's, hey look, that AI literacy is also everyone's responsibility. Because all of those
things are so closely intertwined.
One of the interesting things we did in 2022 was we did some work with all of our teaching
staff. And one of the consistent bits of feedback they gave us about data and IT and cyber
(22:55):
was: we don't know where the rules are that we have to follow.
So, you know, we had a really fragmented policy space because, organically, you know,
cyber did their policies and then data did their policies, and privacy did theirs. So everybody
had done this organically. And so we actually decided to bring it together under a new information
(23:16):
governance policy, with different verticals under that, and to make it all personalised
to roles, so that if you're a teacher, you can just pick it up and go, here's what I need
to do across all of that. And that's something that I think is really worth doing for organisations,
because that fragmentation made sense at the time, because, you know, cyber didn't
(23:38):
exist as a policy area; now it does, and we've got to tell people how to deal with it. And then the
other thing was that the university was doing this harmonisation and minimisation of its
policies, and they didn't want us to tell people what to do. And I said, I'm sorry, in
cyber, we're going to tell people what to do. We're not going to give them a choice.
This is not negotiable. And that is a big mind shift for a lot of
(24:03):
organisations: yeah, we're going to tell you what you're allowed to do to protect
the organisation. We're not going to leave it up to you.
Ah, look, Kate, you touch upon a number of interesting points in that. One of the first, I'd say:
look, as the CIO in a previous organisation, what you're talking about is simplicity.
(24:27):
We had the same thing, we had four or five different policies. And
I sat back when I came on board and I looked at them, and I was reading this going, you know,
I'm already onboarding, I've got so many different things to do. You want
it to be simple. And we literally consolidated, likewise, all these complex policies
(24:51):
into just an information policy. We went, look, from something like eight pages down
to literally two pages, and used simple principles that literally my grandmother could read and
understand. But again, I think people fall back into, I need to be really comprehensive,
(25:13):
and I need to be, you know, to the nth degree and legally binding. It just
overcomplicates it. Simplify it and use principles that people can understand.
When I was leading a privacy programme at one of the large banks a number of years
back, I remember having a machine learning engineer run up to me, and thank God he proactively
(25:37):
reached out to me. He ran me through one of his machine learning use cases, and let's just say it
was in vision, like facial recognition. And I sat back and I was horrified, but I let
him finish. And I went to him, like, look, thanks so much. First of all, there's
the carrot: creating a culture where people can come to you is critical.
(26:00):
So thanks so much for coming to me. Now, how about I play this back to you in a different way?
If you were in the shoes of this customer, and you knew this was being done to you, how
would you feel? He sat back, thought about it and went, oh my God, I'd be horrified.
(26:21):
I said, there's your answer. Just keep it simple, simple principles, you know, things
like: just because you can, should you? Those sorts of things keep things
simple and cut down complexity. Life is complex enough as it is; anything we can
(26:41):
do as leaders to make it more simple for our boards, for our employees, for our communities,
we should do. Like, go look at the privacy policies of most organizations.
(27:02):
People joke about certain technology companies' terms and conditions, but has anyone
ever really read them? They're horrifically complex, for a reason. I think we're doing
ourselves and our customers and our students and communities a disservice. We should
be simplifying it as much as possible.
And that was what I was going to interject with before. So with our cyber policies, we
(27:22):
read them and we were like, they're unintelligible to normal human beings, but they need to be
in the jargon for the people who need to know that. So what we actually
negotiated with the cyber folks was to write a preamble that explained, for normal people:
this is what you need to know, in plain English. And if you are a technical person who needs
to go and do stuff, you go and read the rest of it, and the jargon is relevant and meaningful
(27:45):
to you. But if you're a normal teacher or a normal administrative worker, this is what
you need to read, this is what you need to know. So that separation between the people
who need the technical detail and the people who just need to know is an important principle to
think about.
Yeah. And I think another overlapping principle to consider is: don't try and go for perfection.
(28:09):
80-20. Look at everything 80-20: if you're capturing 80% of the people, then that's
great, go with that. And look, if anyone has questions, create an environment where
people can reach out and go, I'm not quite sure about this. And like you said, have the
more detailed information there. And if they still can't figure it out, come through and
(28:30):
then let's have a discussion. But if you're reaching 80% of people, that's far better than
trying to get 100% and making it so complex that only 1% of people really read it.
Yeah, right. And to that, I've done some work with an organization a while back
where they literally had the policy of, hey, look, we'll do ethics, privacy, and
(28:56):
so on once every couple of years. And I said, no. Such a critical thing is also
adjusting how frequently you do these e-learnings, training sessions and seminars for your employees
and communities, to go, hey, look, okay, things are changing, things are evolving so quickly.
(29:16):
How do you update and keep it relevant for your stakeholders so they understand that?
I think that's one of the big challenges, especially in large organizations: you typically
get your training when you onboard and then you don't get it again. And we were kind
of experimenting with trying to work out how we could deliver just-in-time nuggets of training.
(29:38):
So if you're about to do a project, then you get the, here's what you need to know about
doing projects, and here's what you need to know about privacy for projects, and data governance
for projects. And that was something that we were exploring. And I think it'd be worth other
organizations trying to think that way, rather than telling people when they start, when it's not
relevant, so they won't remember it.
(29:59):
So to the project standpoint, I've always found it really useful,
to your point about the right people getting involved, to try and enforce almost an impact
assessment at the start of the project that takes into account the data aspects. Being
(30:20):
in the data space, how often, Kate, were you involved at the tail end of a project? Hey,
look, we need some reporting done, we're about to go live next week, can you get that done
for us? And then us data people are set up for failure, because, right, hold on, we had
no input into the models or the data to start off with. That's the
(30:43):
common example that we've faced in data: the reporting gets thrown over the fence and we're set up
for failure.
Yeah, that happens quite a bit.
Again, in previous senior roles, I literally had every
project have to take into account, I'd say, data by design, privacy by design,
(31:05):
security by design. And now it's like, hey, ethics and AI by design need to literally
be on a checklist: hey, are these...
It needs to be inserted in the procurement process. So you need to be right there from the time
they're acquiring or building the system.
Yes, exactly.
That was one of the big wins that I had at the university: finally getting cyber, data
(31:28):
governance, data and analytics, record keeping and privacy all into the procurement process,
so that we had checklists that vendors needed to tick off against. It was a really big shift
in our acquisition process, because people bring their stinky systems and it's like,
(31:48):
oh my God, we would never let you buy that.
Well, here you touch upon an amazing point, actually, and a significant risk that
board members and companies are not thinking about right now. With the FOMO of implementing
AI, third party vendor risk is significant. How do you know that the vendor you're working
(32:11):
with isn't going to use an AI tool that potentially leaks information, leaks sensitive, confidential
information? You raise the perfect point, Kate, which is that
the procurement process needs to literally ask: what are your security standards? Are you ISO
27001 certified? Are you, now with AI management systems, ISO 42001 certified?
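Those procurement questions lend themselves to a machine-checkable gate. A minimal sketch of a vendor assessment record follows; the fields are assumptions drawn from the conversation (ISO/IEC 27001 for information security, ISO/IEC 42001 for AI management systems), not a complete due-diligence standard.

```python
from dataclasses import dataclass

# Sketch of a procurement gate built from the questions above.
# Field names are illustrative, not a complete due-diligence checklist.

@dataclass
class VendorAssessment:
    name: str
    iso_27001_certified: bool    # information security management
    iso_42001_certified: bool    # AI management systems
    uses_third_party_ai: bool    # the "daisy chain" risk discussed below
    subprocessors_disclosed: bool
    data_residency_stated: bool

    def passes_gate(self) -> bool:
        required = [self.iso_27001_certified,
                    self.subprocessors_disclosed,
                    self.data_residency_stated]
        if self.uses_third_party_ai:
            required.append(self.iso_42001_certified)
        return all(required)

vendor = VendorAssessment("ExampleCo", True, False, True, True, True)
print(vendor.passes_gate())   # False: third-party AI but no AI-management cert
```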
(32:37):
Increasingly too, because you're buying software as a service quite often now, and they're using
third party products. So you're getting this daisy chain of third party risk where you're
buying a system and they're buying a system and they're buying a system. So you don't
know the provenance of all of these elements.
(32:58):
And that's what our security colleagues are challenged with, right? Because as a CDO, you
go, hey, we've got this platform, this is in the marketplace. Great, we assume it's
all secure because it's on the marketplace. But how can you be 100% sure? You can literally
have someone with enough admin access go, oh, on XYZ data platform marketplace, let's
(33:20):
get this.
We've already had significant supply chain attacks like SolarWinds and there are others.
And we know that there is malware that has been embedded in systems that we regularly
download and use. So it is a real risk. So the risk landscape for the business has increased
(33:43):
markedly as a result of AI. It was already increasing with all the cyber risks, but now
the AI risk is adding to that. So it's an interesting time to be in data.
It's interesting. It's challenging, but there's opportunity, right? Again, back to the fractional
leader standpoint: you can imagine a small to medium sized organization, even large organizations,
(34:08):
where, again, the CDO may be too busy to consider these aspects because they've
got to build their strategy, and we've got the experience to come in and go,
hey, look, you need to consider this aspect, it's not been fully understood, or the board
(34:30):
needs to get a view of this, or the security teams need to get a view of this and manage
those risks appropriately. It's a very interesting time. I'd say, with ChatGPT, well, AI has been around
for about 60 years; all they've done is democratize AI for all, right?
Realistically, yes. I often use the timeline to explain it to people. The concept
(34:56):
of AI has been around for 60 years, but realistically, machine learning dropped in 1997, deep learning
around 2012, generative AI in the early 2020s, and agentic AI in 2024. So a lot of this technology, apart from machine
learning, is actually still quite new. And we haven't had all of the migration of the
(35:19):
knowledge of how to use it properly and safely for fun and profit into the professions, because
they're just busy doing their jobs. So they're just trying to keep up. And a lot of people
are deploying AI now without thinking about it, which is one of the reasons I'm a huge
proponent of the Open Data Institute's one page data ethics canvas, because it's a bunch
(35:44):
of questions on a page that you can just huddle around and think about it for a moment before
you launch into your AI initiative, which is a really good idea to do.
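For readers who want the flavour of that huddle, here is a minimal sketch: a handful of prompts paraphrased from the themes of the ODI Data Ethics Canvas. The published canvas is the authoritative version; the wording below is not verbatim.

```python
# Prompts paraphrased from the themes of the ODI Data Ethics Canvas.
# The published canvas is authoritative; this wording is not verbatim.
CANVAS_PROMPTS = [
    "What are your data sources, and do you have the rights to use them?",
    "What is the primary purpose of this data/AI initiative?",
    "Who could be negatively affected, and how severely?",
    "What are the known limitations and biases in the data?",
    "What actions will you take to mitigate harm, and who is accountable?",
    "How will you be open about what you are doing?",
]

for number, prompt in enumerate(CANVAS_PROMPTS, start=1):
    print(f"{number}. {prompt}")
```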
And, well, exactly. And to that point, what also makes it challenging is that there are so many
(36:12):
frameworks, approaches, standards, rules and pieces of legislation coming out now, right?
So it does make it challenging.
Especially if you operate in multiple jurisdictions.
Exactly, right? It's like when GDPR came out, I remember everyone scrambling then. And again,
the EU just rolled out some new regulation around AI, and it's getting your head around that.
(36:33):
With serious obligations on organizations, and training of people and stuff?
Exactly. Well, look, if I understand it right, they made AI literacy a must. So there you go.
So it's not just something we're saying has to be done.
(36:53):
The Europeans have mandated it.
Yeah. Well, you know, it's interesting, and it'll be interesting with the bifurcation
of the world, you know, where the US probably won't be interested so much in regulating
anything, at least until the next civil war, but the EU, China, Singapore, the UK potentially still want
(37:18):
to actually regulate. Australia is regulating, you know, so we've got the safety standard,
which will eventually turn into legislation. So the regulatory landscape for all of us
is going to be more complex. And that's why you need fractional CDOs.
Fine. Like, I think we need fantastic people like yourself, Kate, you know, running this
education and doing the great work that you do, because like anything, tone at the
(37:38):
top is key. And the board of the university, the Senate, that tone at the top, pushed down
to the rest of the organization, is so, so important, right?
And the Senate, if the board of directors don't really get it sufficiently, I'm not saying
they have to be experts, but they need to know a good enough amount
(38:02):
about AI and its risks and how to manage it, or the right questions to ask as a fellow
board member, to manage that and hold the CDO and the CISO
and the CEO to account, and to make sure those things get rolling with the appropriate funding.
Now, they can't do it on the smell of an oily rag. It needs to be done properly; otherwise,
(38:28):
unfortunately, I foresee more and more incidents coming
up in the near future. And yeah, a really, really well made point that board directors are important.
They have an important role, so they need to get their heads around AI and how to govern
it. So that's something to be thinking about in this space. Thanks so much for your time,
(38:52):
Gladwin. Always a pleasure to chat with you. Thank you again, Kate. Pleasure as always.