
January 21, 2025 • 32 mins

Navigating AI Governance and Compliance

Patrick Sullivan is Vice President of Strategy and Innovation at A-LIGN and an expert in cybersecurity and AI compliance with over 25 years of experience.

Patrick shares his career journey, discusses his passion for educating executives and directors on effective governance, and explains the critical role of management systems like ISO 42001 in AI compliance.

We discuss the complexities of AI governance, risk assessment, and the importance of clear organizational context.

Patrick also highlights the challenges and benefits of AI assurance and offers insights into the changing landscape of AI standards and regulations.

Chapter Markers

00:00 Introduction

00:23 Patrick's Career Journey

02:31 Focus on AI Governance

04:19 Importance of Education and Internal Training

08:08 Involvement in Industry Associations

14:13 AI Standards and Governance

20:06 Challenges with Preparing for AI Certification

28:04 Future of AI Assurance

About this podcast

A podcast for Financial Services leaders, where we discuss fairness and accuracy in the use of data, algorithms, and AI.

Hosted by Yusuf Moolla.
Produced by Risk Insights (riskinsights.com.au).


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Yusuf (00:00):
Today we have a special guest. Patrick Sullivan is vice president of strategy and innovation at A-LIGN. Patrick is a seasoned expert in cybersecurity compliance and AI compliance, with over 25 years of experience in IT security and assurance. Welcome, Patrick.

Patrick (00:20):
Yusuf, thank you so much for having me.
Grateful to be here.

Yusuf (00:24):
Can we start by just talking a bit about your background and how you landed at A-LIGN?

Patrick (00:31):
Oh goodness, Yusuf. I'm not sure we have enough time. When I talk about this, I refer to a Joseph Campbell quote. I don't know if you're a Campbell fan or not, but Campbell has this quote: follow your bliss. If you follow your bliss, doors will open where before there were only walls. And Yusuf, honestly, that's the story of my career.

(00:53):
I started out studying electronics, of all things, in the mid-90s. That experience opened my eyes to what was going on in IT at the time, which was not a new field but, considering where we are now, a relatively infant field, if you will. So I transitioned into IT. That led to transitions into network engineering,

(01:16):
transitions into cybersecurity. Ultimately, about seven years ago, I recognized that my real passion wasn't necessarily working with the technical components anymore. I really love teaching, and there is no better way to teach than to position yourself between the decision makers in the business and the technical experts that are implementing

(01:39):
the things that need to be implemented to actually offer a level of security or control to the business, to mitigate and manage risk in that business.
And so, too many words to say: I transitioned to the role that I currently have at A-LIGN, which is really one of working to help the market think differently about what

(02:01):
is and is not important. Really, my responsibility is to ensure that executives and corporate directors aren't wasting their time chasing things that are inconsequential, but are really focusing on those areas of leverage where, for the minimum input, they can derive maximum output.

Yusuf (02:20):
And what would that encompass? The scale of A-LIGN's activities is quite broad. Where do you find yourself playing the most?

Patrick (02:31):
Yusuf, today I would be lying if I did not say it mainly centers around AI, AI governance, and to some extent AI security. I think there's still a little bit of confusion in the overlap between AI security and AI governance, but 95 percent of my conversations center on governance, and then structures that support building good

(02:53):
governance infrastructures inside organizations.

Yusuf (02:57):
Okay. And when we talk AI security, we're talking about the security of AI, as opposed to AI for security.

Patrick (03:03):
Correct. Absolutely. Right. Those two things are different perspectives, but yes, yes.

Yusuf (03:10):
What exactly is it that you do in your role at A-LIGN? I know you're very prolific at producing information and guidance on various standards, and you've got several roles that we'll talk about later. But what does that look like for you, in terms of shifting that perspective that people might have?

Patrick (03:28):
Goodness. It really is about education, and I know I said that earlier; I really want to come back to that. I think in order for executives and directors to make the best decisions, they have to have the best information and the context of their organization at hand. You know, as a knowledge leader, which is sometimes how we refer to my role internally, it

(03:51):
really is my responsibility, my obligation, to ensure that the market understands what the obstacles are in its way, so that they can make good decisions about how to either go over or go around those obstacles in a meaningful way, while staying on purpose and continuing to work toward the mission of the organization, not becoming

(04:12):
completely distracted, which we do see some businesses doing today, unfortunately.

Yusuf (04:19):
And would that translate into internal education as well? So, A-LIGN staff and what they need to do for your clients?

Patrick (04:28):
Without question, without question. A prime example: many of your listeners are likely aware of ISO's newer standard on AI management, ISO 42001. As an organization that performs certification assessments against management system standards, A-LIGN is a certification body. I'll just refer to us as a CB.

(04:49):
That's the common vernacular. But to become a CB necessarily means that everyone in our organization has to understand our obligations as a certification body. And then, more importantly, our sales team has to understand what the products and services associated with

(05:10):
this new standard look like, so that as they interface with the market, we're not misleading, misrepresenting, or in any way adding confusion to what's already a process that really is loaded with fear, uncertainty, and doubt. FUD is a very big part of any new standards change, or any new regulatory change.

(05:31):
And so in many ways, through education, we hope to minimize that FUD and maximize the confidence that directors and executives have in making decisions about how to proceed.

Yusuf (05:43):
Okay. And so reducing some of the anxiety that might go with trying to keep up with all of these standards.

Patrick (05:50):
Where do I start? There's so much. I feel like I have to boil the ocean. It's all confusing. These are all new words. I want to focus on my business and creating the products or services that we create. I don't want to deal with this, but I have to. That being said, how can we help those individuals approach this process in as direct and meaningful a way as possible?

(06:13):
That really is the burden that I've taken on.

Yusuf (06:16):
Align being a certification body.
Do you often find yourselfinvolved in, pre audit type
consultation services tohelp people get to where
they need to as well?

Patrick (06:27):
So yes, but no. Yes, but no. As a certification body, we have very strict guidelines around maintaining impartiality and independence. And so we can't advise. To maintain our status in good standing as an auditor, we cannot also advise, but we do have very, very strong partnerships with advisory firms that can step in and do the advisory, the

(06:50):
pre-work that's necessary to help organizations prepare to withstand an audit delivered by us. That said, we can perform readiness assessments, which are essentially assessments outside the bounds of the certification assessment. The results of those readiness assessments, however, are simply our perspective on whether or not an AIMS,

(07:15):
an ISMS, a PIMS meets the intent of the standard. We can offer no guidance or recommendation on corrective action without violating our impartiality.

Yusuf (07:28):
And of course, A-LIGN is well known. You've got customers all over the world.

Patrick (07:31):
We do.

Yusuf (07:32):
I know here in Australia we see your reports come up every now and then, for our organizations who rely on various service providers that you provide certifications to.

Patrick (07:43):
Yes, we operate globally. We do, and in our pre-call I think I mentioned this: we're headquartered here in the States, in Tampa, Florida. We have offices in Ireland, in India, in Panama, and, to my corporate marketing team, I apologize for not remembering all of our locations.

(08:04):
But no, we do have a global footprint, so we operate internationally.

Yusuf (08:08):
Excellent. All right. So I want to touch quickly on some of the associations that you're involved in. I know you and I met through the IAAA. So the,

Patrick (08:18):
International Association of
Algorithmic Auditors.

Yusuf (08:22):
there we

Patrick (08:23):
There we go.

Yusuf (08:23):
I couldn't quite get it for a second.

Patrick (08:25):
That's funny how that works. Yeah. And the harder you try to remember, the further away it creeps.

Yusuf (08:31):
That's right, yeah. Anyway, also, you're involved in the IAAA, and then you're also involved, quite interestingly and importantly, in the JTC 1/SC 42 initiative, creating standards,

Patrick (08:45):
Yes. Yes. And I've recently taken on a role with SC 27 as well, the information security subcommittee inside ISO.

Yusuf (08:53):
What inspired that sort of extracurricular activity? You've got a big day job as VP of strategy and innovation, you've got a growing family, as we spoke about earlier, and these extra initiatives or extra activities really would take up, I imagine, a fair amount of time outside of that.

(09:13):
So there must have been some pretty strong inspiration or drive to get involved in those. Can you talk a bit more about that?

Patrick (09:22):
Sure, and I think there's more of an indirect answer to that, Yusuf. Maybe 20 years ago now, I was at a place in my career. Actually, it was 2006, so not quite 20 years, but close. I was at a place in my career where I had taken on a really big job, helping to start a medical school. It was so cool.
(09:43):
I had so much fun. I also failed miserably, because I did not engage in that job as my full self. And so, too many words to say: through that failure I found a book called The On-Purpose Person, wherein you essentially walk through these exercises to be able, for yourself, to finish the

(10:07):
statement: I exist to serve so that, dot, dot, dot.
And so I really took time to get my head right about what's most meaningful and important for me, what my purpose is. And Yusuf, I settled on, and I still to this day absolutely believe, that I exist to serve so that others can

(10:28):
reach their full potential. That's it. And so everything I do, whether it's technical, non-technical, AI-related, non-AI-related, my family work, whatever, everything I do is about ensuring that I stay on purpose and help others reach their full potential. Now, as I look at what we're doing with AI in
Now, as I look at whatwe're doing with AI in

(10:48):
our various markets today, there is a desperate need in our market for education. There are so many standards out here, so many organizations, some with really, really positive intent but very, very poor execution ability. There are so many data points now that I recognize, in our

(11:11):
field, in our market, there's a desperate need for help. And so I've taken on these extracurricular activities, not necessarily for any other reason than that I absolutely believe it's the right thing to do.
Because we have a market, we have a community, full of not just experts but people that are incredible human

(11:31):
beings, looking to create something better for humanity. And so I see my role, though small, as one of ensuring that people have the best data they can have, the best framing, the best perspective they can have, to make good decisions about where to go next. So it is a burden, but I love it, Yusuf. I absolutely love it.

Yusuf (11:53):
And thank you for that. I think, when I think about standards, and you mentioned it just there, there's a lot of work going on. One of the things that I struggle with, and I know that many people that I speak to struggle with, is how do these things all fit together? There are various standards, and I know that you do spend quite a bit of time thinking about that and

(12:14):
putting together information or material about that. But at a high level, how do you think about how all the standards work together, and what that means for implementation for somebody that wants better AI governance, let's say?

Patrick (12:31):
Yeah, yeah. And I think first, and this may be the disappointing statement, but I don't think all the standards work together. I think some of the standards, some of the regulations, have a very specific view that can be complemented by other standards, but they don't necessarily support each other, if that makes sense. And Yusuf, honestly, I don't know if you remember the Balanced

(12:51):
Scorecard, the whole Norton and Kaplan approach to taking these very soft virtues or goals and slowly working through a process of connecting those virtues and goals. That's what's missing in our field, until you begin looking at what ISO has done with their management systems.

(13:14):
In ISO management systems, whether it's information security or, since we're talking about AI, the AIMS, what ISO has offered is an extensible structure, a set of best practices that any organization can take on and bend and twist and extend and cut, and really mold to form the foundation of

(13:37):
what allows that organization to then bolt on, in a very meaningful way, in a very practical way, the requirements that they need to meet external stakeholder obligations.
So, too many words to say: though there is no Rosetta Stone today for the overlap of the various regulations and standards, in my mind, beginning with an ISO management system, whether

(14:00):
it's the ISMS or the AIMS, puts organizations in the best position to reasonably and responsibly extend and grow to meet the needs of those external obligations. In terms of AI standards in particular: of course, 42001 is one of the major standards

(14:22):
that we're working on. Everybody's talking about it: what exactly is it, and why is it significant for organizations that are developing or using AI systems? It, again, is a governance and management system. So it's a framework. Let's take a step back very quickly: governance. There are a lot of different ways to define governance.

(14:44):
I really like the definition that ISACA has offered, which is: governance is a value creation process that centers around creating desired outcomes at optimized risk and cost. That said, we know the three variables that we really have to focus on if we're going to ensure we're creating value, or governing, inside of our organization.

(15:06):
That is: what do we really want, what risk are we prepared to take on, and what cost are we prepared to take on to do so? And so, as we think about the management system, it is a structure that walks us through a process of thinking about, first of all: who are we? What is the context of our organization? Where do we sit in our ecosystem?

(15:28):
Who are our internal stakeholders, our external stakeholders? What is this thing that we say we're even doing? So we have to, as an entity, define those things very clearly, very articulately, so that we can move on to the next step of the process, which is getting management buy-in.
You know, once we understand who we are and what it is we

(15:49):
claim to do, the next thing all management systems require is that management buy in and actually commit to ensuring not only that resources are available to execute against this vision, but also that appropriate planning and appropriate commitments are made to ensure that those resources can do what they need to do. So, what we see through the management system structure

(16:12):
itself is a series of what are referred to as clauses. These are just commitments or steps. They're gate checks, if you will: I need to do this; once this is done, I can move to the next; once that's done, I'll move to the next. But we walk through this process of defining our context, of getting management and leadership buy-in, of really planning.

(16:33):
You know, it's one thing to say that we want to do something, but if we begin executing before we've planned any changes, we're not going to create what we want. The management system recognizes this and forces us to operationalize planning, so that we can then allocate appropriate resources, we can provide appropriate support, we can operationalize.

(16:53):
And then on the back end, the management system itself is based on ideas offered to us by W. Edwards Deming: this whole idea that if we have a system we want to improve over time, we really need to do a few things over and over and over again. We've got to plan our efforts, we've got to execute, we've got to do those things we said we would do.

(17:15):
Periodically, we have to check to be sure we're still on track to create the outcomes that we want. And if we're not, we have to remediate. We have to act in some way, shape, or form to course correct. And so we create this cycle of plan, do, check, act that's built into the management system itself. So again, in my mind, it becomes a very easy playbook.

(17:39):
It's almost, in meditation, what do they call it, a mantra? The management system almost becomes a mantra for your organization: to set the tone, to set the expectation for how we'll think about changes, how we'll execute those changes, and then how we'll ensure over time, in a consistent, meaningful way, that those changes are actually creating the outcomes that we

(18:01):
intended for them to create. Now, with that as the backdrop, plug in Regulation A, plug in Regulation B, plug in Regulation C, and suddenly you have this operating system inside your business that allows you, not to ensure success, but to be in the best position to actually create what it is you say you want to create.

Yusuf (18:23):
Okay. So it's translating intention into action, specifically for AI.

Patrick (18:29):
You said that so much better than I did, Yusuf, in so many fewer words. Yes, yes.

Yusuf (18:34):
Okay. And why would an organization want to adopt 42001, then certify against it, and particularly certify,

Patrick (18:45):
Yeah, so two different things. Oh, three different things here. So first of all, why implement: hopefully we articulated all the positive reasons, all the benefits for an organization, just a moment ago. Why certify: we see two different things happening right now. We see market pressure on organizations, particularly here in the States, to offer assurance to their customers,

(19:09):
to those in their value chain, that they are using AI, that they are developing AI, in a responsible way. We also see regulatory pressure, not so much here in the States, but overseas, in Europe particularly. I think many folks are familiar with that: the EU AI Act. Regardless, organizations are developing and using AI.

(19:29):
And organizations need to have a way to offer assurance, either to regulators or to their individual market, that they are doing the right things in that development. The right things to protect against bias, to protect against harm, to protect against manipulation: all those things that we need to ensure we protect the community against.

(19:50):
The AI management system, the governance system, allows us to have an external, independent entity audit us, audit our activities, and offer third-party validation that yes, in fact, we are doing those things correctly.

Yusuf (20:06):
In implementing those, and as you go through certification processes, what are the one, two, or three most common challenges that your clients face? You guys have been doing this for quite some time now, so you will have seen some patterns emerge, I imagine. What are those common challenges?

Patrick (20:27):
Honestly, one of the biggest, and "challenge" is probably the right way to frame this, one of the biggest challenges I've seen for organizations is that they just don't know who they are. You know, I mentioned the importance of scoping and setting the context, understanding who you really are. That is the key to anything else that happens inside

(20:47):
a management system. Many organizations have developed a habit of running so quickly and being so opportunistic that they struggle to really step back and articulate who it is they really are, and what's most important to them as an organization. So that's challenge number one: scoping and context, which without doubt are the keystones, the

(21:10):
most important part of building your management system.
Other areas of concern: risk, believe it or not. We see organizations that are really, really good about risk from an enterprise perspective. But as we start thinking about our AI systems, we have to think about risk in the context, specifically, of those systems.

(21:31):
What risks are we introducing? What vulnerabilities are we exposing to the world? You know, what is the likelihood of someone actually compromising what it is we've put out there? That becomes a very difficult thing for a lot of organizations, because we're asking them to apply tools and processes that they're already comfortable with in a different context.

(21:53):
And then the third biggest thing, I think, really ties back to scoping and context, and that is impact assessment. You know, we see a lot of organizations that have built up really, really strong privacy programs, and therefore they're very familiar with data protection impact assessments and the concept.

(22:13):
As we think about AI impact assessments, however, the impact to stakeholders absolutely hinges on our ability to articulate cleanly our context: who are our internal and external stakeholders? So we're asking organizations today that are struggling with basics to really perform complicated maths in a

(22:35):
way that they're just not prepared to be successful at.
And so, Yusuf, you didn't ask, but I will say, as we think about ways for organizations to better prepare: I always encourage, always advise organizations that are looking at starting this journey to partner with an advisory firm, with people that have walked this path

(22:55):
before and already have an understanding of where the landmines are, and where the obstacles are that need to be addressed head-on, so that everything else can be easier.

Yusuf (23:06):
So, thinking about the various risk scenarios that might exist, particularly as AI systems are developed in different ways by different organizations: what are you seeing in terms of building your own versus buying something that somebody else has created? Is it more organizations building, which

(23:28):
means that they have to do their own risk assessments from scratch, or is there more focus on buying, which means that they are able to leverage the developer's thinking around risk and then extend it for their own organization?

Patrick (23:47):
Not a direct answer here, Yusuf, but I will say, based on our market, because we work with so many entities that sit in the middle. SaaS providers, we'll use SaaS providers as an example. Those organizations are very, very nimble, and in being nimble they recognize that you don't want to recreate the wheel.

(24:08):
So many of our customer organizations are consuming upstream services: OpenAI, Anthropic, Google, you name it. So they're consuming upstream services that they then apply their development to, to create the service that's sold to their customers. And so we're honestly seeing a little bit of both.

(24:28):
It's not salt water, it's not fresh water; very much, we're dealing with brackish water. And the cool thing is, in clarifying the context of your organization, the ISO management system actually has roles created to help you easily label your position, to make it very, very plain, not just for you but also for your customer base,

(24:49):
how you're participating in the overall ecosystem. So: I'm not creating the foundation models. I'm consuming a foundation model and then applying my own development, my own practices, to create something very specific that I then turn around and sell. But we can easily document that chain, and easily audit and then report on it. As we think about risk, to your point, it can be difficult to

(25:13):
assess risk in those situations. But we do the best we can based on what we see, and what we know is associated with the individual customer's SDLC. That really is where the focus rests.

Yusuf (25:26):
Talking about some of those upstream providers: are you seeing more AI implementation nowadays along the lines of large language models and generative AI, or more traditional models? Or is that shifting?

Patrick (25:43):
I think it is very much vertical-specific. As an example, I know many of your listeners are in financial services. You know, as I think about fraud detection, I'm not seeing a ton of LLM use for fraud detection. We see more traditional machine learning models associated with some of those activities.

(26:05):
But that doesn't mean that organizations haven't created new and interesting ways to interact with their customers using large language models.

Yusuf (26:14):
And I suspect we might see more and more of that.

Patrick (26:18):
Well, Yusuf, I'm not sure if you share this opinion, but with agentic AI, what we see happening with agents: that, without question, is the next frontier, and it's already here. So that is something that I think will catch many organizations off guard if they're not ready.

Yusuf (26:36):
What does that mean for risk, though? So with traditional models, it's hard to think about risk.

Patrick (26:42):
Yes,

Yusuf (26:42):
You introduce generative AI, and it becomes a bit harder to think about risk. And now we're going agentic. What's actually happening there, in terms of the ability to identify those risk scenarios and just know what your risks are?

Patrick (26:57):
Well, I can't answer that question, because there are so many other, smarter people working on this. This is a recognized issue in the industry today. What I will say for those listening, to outline the problem as Yusuf laid it out: with LLMs, we kind of understand what's happening. We know a little bit of the data is always going to be obfuscated, so we might not necessarily have

(27:19):
full understanding or full explainability, but we kind of have a concept of what risks are associated with LLM use.
Just imagine your LLM: if somehow you were able to create a persona for that LLM, give that persona a goal, a target, a name, and then turn it loose, let it run wild.

(27:40):
That is agentic AI, and so, necessarily, we're introducing the potential for so many emergent properties and so many emergent risks. We're still not sure how far in the future that's going to go, but I think we're going to have to figure out how we're going to best proceed with this. I don't think that we're fully aware of the full extent of the risks that we're introducing into our systems, Yusuf. Nonetheless, again, a lot of really, really smart people

(28:02):
are working on addressing this problem today.

Yusuf (28:04):
So, looking ahead to the rest of 2025, because we're almost through the first month: what do you see coming over the next year, in terms of the big things that you think will happen in AI assurance, that you're starting to prepare for now?

Patrick (28:22):
Agentic AI, first and foremost. That is something that I think is keeping a lot of people up at night right now. You know, honestly, one of the things, and I don't know where this will land, and I don't know how much your listeners are involved in this process, but especially out of the EU, with harmonized standards, what we're seeing right now is a lot of positive

(28:43):
intent to clean up standards and regulations, and harmonize them across a common theme. What we're also seeing is that, because people systems are generally fraught with politics and ego and all those things, that process is broken. And so I do have a concern.

(29:04):
I think we all feel a little bit of pressure about where we'll land with global regulation around AI governance in 2025, but that's something that I'm very, very keen on tracking, because I think that's going to have an impact on more organizations. Now, the EU has a conformity standard situation that's different from what we currently realize.

(29:25):
As an example, for your listeners that do have a presence in the EU: you're very likely bound to the requirements of the EU AI Act. I can say that, were you to ask me what the conformity standard was to show that you're in conformance, I would have to tell you there's not one, and I don't expect there to be one for you anytime soon, which is a very, very

(29:45):
uncomfortable position to be in. And even more reason we're encouraging organizations to [inaudible], so that organizations around the world will be positioned to pivot a little bit, rather than recreate from scratch with no time.

Yusuf (30:11):
Yeah, that makes sense. I think, whether regulation exists for you or not, something like the EU AI Act, as legislation, is quite extensive anyway. And so if you can find a way to interpret that and start preparing for conformance, then really any residual

(30:33):
is just a gap that can be fixed, as opposed to starting from scratch, is what we'd say.

Patrick (30:37):
Right. And because management systems are built to be extended, effectively all you're doing is executing the management system appropriately at that point. Yeah, so lots of wins. Lots of wins for organizations that choose to pursue it.

Yusuf (30:51):
Choose to do the right thing. And then finally, as we get close to the top of the hour: where can our listeners learn more about A-LIGN's services and your work in AI governance, and potentially connect with you?

Patrick (31:05):
Feel free to connect with me on LinkedIn. Yusuf, I'm not sure if we can provide links.

Yusuf (31:09):
Yes, absolutely.

Patrick (31:10):
Happy to provide a link. And then again, I serve with A-LIGN. If we could provide a link to A-LIGN as well: for those of you that are in need of third-party compliance, attestation, certification, authorization, we're one of the few companies in the world that effectively handle it all. I will say we don't do a ton with regulation today, but

(31:31):
in third-party certification and attestation, we've developed an expertise that I'm really, really proud of, to be perfectly transparent.

Yusuf (31:41):
Excellent. Patrick, thank you for joining us on the show today.

Patrick (31:44):
Yusuf, thank you so much for having me. It's great, great speaking with you in this venue.