Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:06):
Good morning and good afternoon to those of you who are joining from across the world.
We are here to, uh, talk to you about, uh, the EU AI Act.
I'm Krishna Gade.
I'm the Co-founder and CEO of Fiddler.
I'll be your host today.
We have a very special guest on today's AIExplained, and that is Kevin Schawinski,
uh, CEO and Co-founder of Modulos AG.
(00:30):
Um, welcome, Kevin.
It's a pleasure to be here, Krishna.
Awesome.
Uh, Kevin is the Co-founder and CEO of Modulos, uh, where he leads the mission
to develop and operate AI products and services in a newly regulated era through
their Modulos AI governance platform.
Uh, Kevin, uh, maybe before we go into it, would you like to say something
(00:54):
more about yourself, you know, how you started the company, what you do, uh, for our audience?
Yeah, uh, it's a pleasure to, uh, to join this forum here.
So, um, I'm actually by training an astrophysicist, and I got into AI
and machine learning something like 10 years ago, when we thought it was a
bubble, but of course it was barely the beginning of what it is today.
(01:18):
Um, and, uh, I left academia to found Modulos, first thinking about
Responsible AI, Trustworthy AI, and finding methods to help
people build better models.
And then about three years ago, we saw the early draft of what is now the EU AI Act.
And we read this document, and we tried to understand what this
(01:43):
implied if it became law.
And we realized this is a fantastic opportunity to build a product for an
era that's very different from the one we're used to, one where AI is a product,
a regulated product, and a regulated activity, and that's how the Modulos
AI governance platform was born.
(02:05):
Awesome.
That's actually a great segue into our current topic, Kevin.
This has been, uh, the last few weeks have been big for AI regulation,
and in California we have had Gavin Newsom weigh in on a bunch of AI
regulations, including one AI act.
Uh, so maybe, uh, let's start with, like,
(02:26):
So, what is the EU AI Act?
You know, could you tell our audience what it is and the details?
So, the best mental frame for thinking about the AI Act is GDPR.
In 2016, the EU decided to set, effectively, global standards for
(02:48):
data privacy and data protection.
Uh, and even though it's a European regulation, it, uh, had
echoes, uh, around the world.
In the United States and other countries, we had to start thinking
and work seriously on data protection and privacy.
The EU saw this and decided that one of the next things they
wanted to do was to regulate
(03:10):
AI.
So, um, GDPR covers the data side, and they wanted to cover the algorithm
side, and we can go, uh, further, uh, later on into what the EU really means
by AI, because it's not what we have in mind when we hear the word.
Um, the EU did sort of early studies on how they would want to go about this, and
(03:32):
they settled on two decisions that are fundamental, but that were not obvious.
I don't think these were obvious decisions.
And now, when we talk about AI regulation in whatever country or
forum we're talking about, these two decisions are more or less assumed
to be the way we do AI regulation.
So the first one is that we don't regulate the technology, so the AI Act
(03:56):
doesn't tell you how many layers you're allowed to have in your neural network,
or what your F1 score needs to be.
It regulates the product that contains AI, and actually doesn't
care what kind of AI it is.
It could be the latest LLM, or it could be a very simple logistic regression.
You've now built an AI product, and so you're covered.
That's decision number one.
(04:17):
And then decision number two was to say the obligations you have,
the things that you need to do in order to have a compliant product,
scale with the risk of the application.
So the same AI model, um, when it's used to decide whether
you get credit or not, is much riskier than if it decides
(04:41):
whether a certain email is spam.
And so this risk-based approach is something that's being, um,
copied by basically, uh, everyone.
And then, as a final, uh, point that I think is interesting and is not well known
about the AI Act: it has a certain structure, and that structure is
inherited from the regulation that many of the writers worked on before the AI Act,
(05:07):
and that's the medical device regulation.
So, the regulation that, if you want to bring an x-ray machine or an
insulin pump to market in Europe, um, that's the medical device regulation,
and the AI Act has a similar structure.
So when you think about, okay, what is this AI Act all about?
How does it work?
Think about the steps you would take, um, if you wanted to bring a
(05:29):
medical product to market, and then a lot of things fall into place.
Absolutely.
Yeah.
So I think, you know, uh, one of the things that happened this
week is, like, the governor of California vetoed SB 1047, although there
were some other regulations, like SB 942, that were passed, right?
So there's always this concern when people talk about AI regulations,
(05:51):
whether it'll add bottlenecks, uh, to AI development, and the
crowd is always divided on that.
So, can you tell us what the EU AI Act will do here?
What's your view?
Will it help open up the AI market in the EU or add further bottlenecks?
So the EU has been broadly criticized for stifling, uh, innovation
(06:13):
with the AI Act, you know, the U.S. invents and the EU regulates.
And I think that, in this situation, is slightly unfair, because in the EU,
uh, there are now clear rules and clear timelines on what you're supposed to do
in order to have a product on the market.
Um, in the US, where of course most of the AI innovation happens,
(06:36):
we have a much more piecemeal, haphazard approach to AI regulation.
We have the executive order from President Biden, basically directing
the agencies to draw up standards, and actually already start enforcement.
So in Europe, we still have 22 months left.
In the United States, the regulatory agencies are already
bringing cases to court.
(06:57):
And of course, because there's no federal law in the United States, the individual
states are cooking up their own laws.
So California, of course, is working on many of those, but most
of the other states are looking at their own laws, and Colorado has
already passed its own AI Act.
Yeah, makes sense.
So then, like, how should a U.S.
(07:18):
company think about the EU AI Act? You know, how does it affect, say,
a large enterprise company that may have a distribution of their products or services
in the EU, or even maybe a startup that wants to capture the EU market?
So, uh, yeah, you had exactly the right phrase at the end, if you
(07:38):
wanted to capture the EU market.
Um, think about it again like GDPR: it's a European rule.
But as long as your data has anything to do with Europeans,
even if you're a US company, you have to start thinking about it.
So there's an extraterritorial aspect to these European laws.
This is by design.
(07:58):
This is not a mistake.
This is absolutely intended.
And so if you're an American company, whether you're
a startup or whether you're a giant enterprise, uh, you will fall under the AI
Act as long as your product or service is available in the EU, and for that it
(08:19):
might be sufficient that if I'm in Germany or if I'm in Spain and I can log on to
your website and I can procure a service, that's certainly sufficient, and there
can even be cases where, if the output from your system has a material effect in
the EU, even though both your company and your customer are in the US and Canada,
(08:42):
if that output has a material effect in the EU, you might also be covered.
And so, even though
you're not located physically in the EU, this is definitely something
you should pay attention to.
Yeah, makes sense.
So, let's actually step back a little bit, right? Uh, you know, people talk
about model governance all the time, uh, especially in the last few years.
(09:05):
As you articulated, there are many governments rolling out guidelines, and
many companies advertise themselves as, you know, subscribing to
responsible AI principles. But, you know, what are, uh, some of the things
that people need to think about for model governance when it comes to generative AI?
Because I, I feel like, uh, machine learning has been a well-
(09:26):
studied problem, especially when it comes to model governance.
But what's your point of view when it comes to generative AI, which
is, you know, different from machine learning, and how does one think about
model governance for generative AI?
So the approach that the AI Act takes, and that a lot of these
regulatory frameworks take, is ultimately about consumer protection.
(09:49):
And so, what's inside the box is less important than what the box does.
That said, GenAI brings its own challenges to dealing with questions like
transparency, like traceability, and of course then the challenges of bringing
GenAI to production when you think about guardrails and AI cybersecurity.
(10:14):
The more recent regulatory frameworks, both the thinking in the EU but also
in the United States, focus specifically on what those challenges are:
technical challenges, uh, to do with GenAI, and I think there's a lot of
thinking, there's a lot of good work going on trying to find best practices.
So I don't think the laws anytime soon are going to tell you how to use
(10:38):
LLMs, but a lot of the best practices and the technical standards are talking about exactly that.
So there's one difference in GenAI compared to traditional
machine learning, right?
In machine learning, you know, most teams train their model from scratch.
You know, they might use libraries like TensorFlow or Scikit-learn.
(11:00):
Whatever it is, they can train their own model on their data. But with GenAI,
you have all these pre-trained models that vendors give you, you know, Microsoft,
you know, Google, all these big players, open source as well; there are lots
of these pre-trained models that you could then leverage, you know, do your
own prompt engineering and fine-tuning.
Now, how do you then apply governance?
(11:21):
You know, um, first of all, how do you even regulate, you know,
because there's this conflict between the developer of the pre-trained model
and the deployer? Uh, so could you just talk about those, uh, nuances?
Sure.
So, GenAI has, uh, additional challenges for exactly the point
(11:44):
that you outlined here, which is that you're taking a pre-trained model from
someone, and then you're building your own product, um, based on that.
And the way the regulators tend to look at it is, if you're building with it,
you assume a lot of the responsibility for it, and simply saying, well, I
(12:04):
took the pre-trained model from this provider or that provider,
um, is not going to be much of a defense if your product causes harm.
And so that means, in this new, uh, regulated AI era, you should think
very carefully about whose pre-trained model, or whose, uh, general purpose
AI, which is the term the Europeans now use,
(12:26):
uh, you're going to use. The Europeans want, uh, well, require the general
purpose AI models, that is, the big pre-trained, um, LLMs, um, to make
certain disclosures within 12 months.
So this is now in, in 10 months, where they have to deliver, amongst other things,
a very detailed model card, uh, that ought to include, um, details on what data
(12:49):
the model was trained on and how it was trained and what safeguards it includes.
Now, if you're going to be building with those models, you want that
information, um, not just from a technical perspective, but also from
that regulatory perspective, so that you know exactly what you built with and
what risks you took by using the model.
(13:10):
And I think a big question is just how detailed those disclosures will really be.
And the question that I have is whether maybe some of the big, uh, GenAI
foundation model companies might choose to just not make the disclosure and say,
look, it's a model not for use in the EU.
That might well happen.
(13:30):
Yeah.
So what you're saying is, uh, so just to give an example: if I'm an
enterprise customer, like a bank or an airline company that is operating in
the EU and using one of these pre-trained models, then I have to have a model card
that not only tells how I fine-tuned this model, but also how the original
model, the pre-trained model, was trained.
(13:53):
Yeah, because it's a product safety, product liability approach,
not a technological approach.
So, just as you can't make a toy and just put a component in there
that you have no idea what it is, so you can't build an AI application
in your bank or in your insurance company where you don't know what's in it.
Right, so saying, well, you know, I just got it from
(14:15):
somewhere, that's not going to cut it.
And this change, by the way, goes further, right?
Um, if you take one of those pre-trained models and you build an AI
product out of it, right, you wrap it, you have an API, you license that product,
and then somebody else purchases it.
Um, you then pass that responsibility along, like, you're responsible,
(14:39):
and then the person or the company licensing your product also takes
on some of the responsibility.
So there's a whole new, um, set of liabilities along the AI supply chain
that will keep going.
So can you just articulate a little bit more of what should
go into this model card?
What should I be prepared for in terms of,
(14:59):
uh, showcasing it to the potential regulator that might ask for this
thing? You know, uh, what are some of the aspects, uh, that I
need to think about as a customer?
So the OECD model card and dataset cards are generally considered a very good
example of what that should look like.
(15:20):
It's, by the way, also the template that HuggingFace uses, so
if you get the HuggingFace template, you have a pretty good start.
If you're providing one of those GenAI models, there's a whole
article in the AI Act that you should study, and there are some things in there
that I, even I, still have some open questions on what that really means.
(15:42):
And I think one of the things that's going to be contentious is
that you have an obligation to tell what data you trained on, at a
level of detail so that copyright holders can enforce their rights.
And, of course, that's a very loaded subject.
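For readers who want something concrete, here is a minimal sketch of the kind of model card being described, written as a plain Python structure; the field names and values are illustrative assumptions, not the official OECD or HuggingFace template and not the AI Act's required wording.

```python
# Illustrative sketch only: field names are assumptions, not an official
# OECD / HuggingFace model card schema or the AI Act's prescribed format.
model_card = {
    "model_name": "example-finetuned-llm",            # hypothetical model
    "base_model": "example-gpai-provider/base-llm",   # the general-purpose model you built on
    "intended_use": "Customer-support chatbot for retail banking",
    "training_data": {
        "sources": ["licensed support transcripts", "public product docs"],
        "copyright_notes": "summary detailed enough for rights holders to check",
    },
    "fine_tuning": {"method": "LoRA", "epochs": 3},
    "evaluation": {"accuracy": 0.91, "toxicity_rate": 0.002},
    "safeguards": ["PII filter at inference", "toxicity guardrail"],
    "known_limitations": ["English only", "not for credit decisions"],
}

def missing_fields(card: dict, required=("base_model", "training_data", "safeguards")) -> list:
    """Return required disclosure fields that are absent or empty."""
    return [f for f in required if not card.get(f)]

if __name__ == "__main__":
    # Empty list means the sketch card covers the fields we chose to require.
    print(missing_fields(model_card))
```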
Got it, got it.
And so, uh, there was an interesting question relevant
(16:04):
to what you just talked about.
You know, if it's a pre-trained model, how does it comply with the
EU's right to be forgotten, you know?
Like, you know, how are you gonna ask the
vendor to, like, forget some data or, like, you know, forget some
training examples?
This is a fascinating question.
I've talked to quite a few, um, lawyers from different
(16:26):
European countries about this.
I think there's no strong conclusion to this question.
So let me backtrack and explain a little bit what you're addressing here, Krishna.
So if I'm your customer, uh, under the, uh, GDPR, I have the right to
tell you, look, delete all my data, I don't want you to know anything about
me anymore, and you have to comply.
(16:46):
So now, if I've trained a hundred-million-dollar or a billion-dollar
foundation model on that data, and I come to you and say, you've
got to forget everything about me,
well, it's not clear how you would do that.
I know that some companies, um, have essentially used guardrails, where
(17:06):
they have, um, filters during inference, where they will try to
filter out if somebody has, um, essentially exercised that right.
But the question of whether I can ask you to essentially delete me from your model
weights is an open question, and we'll have to see what the courts decide.
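As a rough illustration of the inference-time filtering mentioned here, the sketch below keeps a register of erasure requests and redacts matches from model output before it reaches the user; this is an assumption about how such a guardrail could look, not a claim about any particular vendor's implementation.

```python
import re

# Hypothetical register of data subjects who have exercised their GDPR erasure right.
erasure_requests = {"Jane Example", "jane.example@example.com"}

def redact_erased_subjects(model_output: str) -> str:
    """Redact mentions of erased data subjects from generated text.

    Note: this filters the output only; it removes nothing from the model
    weights, which is exactly the open question discussed above.
    """
    redacted = model_output
    for item in erasure_requests:
        redacted = re.sub(re.escape(item), "[REDACTED]", redacted, flags=re.IGNORECASE)
    return redacted

print(redact_erased_subjects("Contact Jane Example at jane.example@example.com."))
# -> "Contact [REDACTED] at [REDACTED]."
```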
Right, because it's like, you know, uh, even if it's a fine-tuned model,
(17:31):
uh, you know, you don't have, like, uh, controls into
the original data that was used to train the pre-trained model.
So it's going to be interesting how this regulation gets implemented.
Uh, now, so, maybe going into a little bit more detail on the EU AI Act,
let's, like, let's walk through some of the practical steps, right?
(17:53):
What should an organization do to implement, uh, the transparency and
responsible AI required by the AI Act?
So for example, you know, things like, you know, monitoring,
governance, model testing, you know, like, could you walk us through,
like, some of the practical steps?
What are the processes and things that an
organization should put in place?
(18:15):
Sure.
So the first thing that everyone should do, as long as they suspect
they would be in any way covered by the AI Act, is make an inventory
of all the AI applications you have.
And if you do that, you'll be surprised by how many more you find than you expect. And here
(18:35):
I'll just do a 10 second detour.
The definition of AI is not a scientific or technical one, it's a
legal one, and very simple algorithms, a linear regression, an if-then-else
statement, could be considered AI under these new legal definitions.
(18:55):
In fact, the U.S. is more practical.
They more and more use the phrase automated decision system, and then
it's much more helpful to realize what we're really talking about.
So you need this inventory, and you need to figure out how risky
those applications really are.
And one thing I would stress, this is extremely important because
there's almost no time left.
(19:16):
Find the prohibited applications.
The prohibitions kick in at the end of January next year, and if
you're found to be operating a prohibited AI system, that's a 7
percent of global turnover fine.
So the prohibited systems that you're most likely to be operating today are so-called real-
(19:38):
time biometrics and real-time emotion recognition and manipulation systems.
So the classic case here is those HR interview software tools,
where you basically, you know, you see my video here and then on the side it
says, like, oh, Kevin, he's really nervous and he's probably lying right now.
Those are probably illegal very, very soon.
So make sure you take those out of commission.
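To make the inventory step concrete, here is a minimal sketch of what such a register might look like in code; the risk tiers and example systems are illustrative assumptions, not legal classifications under the AI Act.

```python
from dataclasses import dataclass

# Illustrative risk tiers loosely echoing the AI Act's structure; mapping any
# real system to a tier is a legal judgment, not something this sketch decides.
TIERS = ("prohibited", "high", "limited", "minimal")

@dataclass
class AISystem:
    name: str
    description: str
    risk_tier: str  # one of TIERS

inventory = [
    AISystem("spam-filter", "Flags inbound email as spam", "minimal"),
    AISystem("credit-scoring", "Scores loan applicants", "high"),
    AISystem("interview-emotion-scan", "Rates candidate nervousness on video", "prohibited"),
]

# First priority, per the discussion above: find anything prohibited.
to_decommission = [s.name for s in inventory if s.risk_tier == "prohibited"]
print(to_decommission)  # ['interview-emotion-scan']
```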
(20:01):
Beyond that, um, the steps you should take are: you should have, like,
um, a multi-stakeholder process to set up your governance processes.
And the most important thing to do there is have an institutional
policy on how you're using AI.
This should be a document both about processes, but also about your values.
(20:22):
Like, what is it you expect of your AI systems?
What is it that you expect of your developers and your
users and your customers?
Um, then, on the more technical side, um, the AI Act expects you to
have a quality management system.
So this is, uh, analogous to the quality management system that
I'm sure many of the attendees have in their company already.
(20:45):
But it has to be one specifically for AI.
So it operates slightly differently from a traditional quality management system.
And the other thing you have to have is a risk management system.
So again, most bigger companies will have one or more risk
management systems, but this has to be an AI risk management system.
(21:06):
And this is where your question about how it goes into the technical
field comes in. Um, AI risk management means you have to continuously monitor the risks
posed by your AI system, and if you have evidence that the risk is above a
threshold or has gone from managed to not managed, you should act on it.
(21:27):
So that means whatever model you have in production, you should be monitoring it
and making sure that the risks it poses, and these could be risks of
discrimination, or of error, or, um, even environmental risks, and other
types of risks as well, that you
(21:49):
constantly monitor and mitigate them.
And setting that up in practice can be a challenge, um, but of
course it connects to infrastructure that you already have if you deploy AI
models, because you should be monitoring your models and
you should know how they're behaving.
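A minimal sketch of that continuous monitoring loop might look like the following; the metric names and thresholds are assumptions chosen for illustration, not values prescribed by the AI Act or any standard.

```python
# Hypothetical risk thresholds set by an organization's own risk policy.
RISK_THRESHOLDS = {
    "demographic_parity_gap": 0.10,   # discrimination risk
    "error_rate": 0.05,               # quality risk
    "kwh_per_1k_requests": 2.0,       # rough environmental proxy
}

def check_risks(latest_metrics: dict) -> list[str]:
    """Return the risks that have crossed their threshold and need action."""
    breaches = []
    for metric, limit in RISK_THRESHOLDS.items():
        value = latest_metrics.get(metric)
        if value is not None and value > limit:
            breaches.append(f"{metric}={value:.3f} exceeds limit {limit}")
    return breaches

# Values as they might arrive from a monitoring stack (illustrative numbers).
print(check_risks({"demographic_parity_gap": 0.14, "error_rate": 0.03}))
```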
Awesome.
So that's a great point, right?
(22:09):
So, uh, of course we are an AI monitoring company, and one of the things
we get asked by our customers who are regulated is, what should be my
thresholds, you know, how do I monitor it?
What thresholds would get me past regulatory oversight, you know?
And so, does the EU AI Act, uh, again, I know the answer, but I think for the audience,
(22:30):
like, would it prescribe the thresholds?
Like, say, hey, you can't go below, you know, 80 percent accuracy,
or above 20 percent hallucinations, or you can't have, like, this much bias
in your models and things like that.
I wish, I wish there was a nice appendix to the AI Act that
gave you, like, quantitative
(22:52):
answers to all those questions, and of course there isn't, right?
Um, a law is written by lawyers and by parliaments, and it has very general
language, so the AI Act will say something like, your training data must
be sufficiently representative towards the groups or individuals affected.
So that's a nice legal phrasing, but as an engineer, as a data scientist,
(23:15):
what am I going to do with that, right?
This is a real challenge.
So the first principle I would take here is, um, in the absence of, like, a long
record of court cases and decisions, um, as long as you're engaged in a good-faith
effort, I think you're going to be fine.
So this means you have to have a well-maintained risk management system
(23:37):
where you explain, look, these are the parameters, or these
are the metrics, that we're monitoring.
And we've decided, like, look, we just don't want this fairness metric
to exceed this threshold, for reasons that might not be too, too deep, maybe
something you determine empirically or think sounds reasonable.
As long as you have a process in place, as long as you show that you
(23:58):
adapt, I think in most cases you're going to do well if challenged.
I think this will change over time, and I think we'll get industry
standards, um, in specific verticals.
That's actually a good segue to this new industry standard called
ISO 42001 that has emerged, right?
(24:21):
Uh, how, uh, you know, an organizationcan get certified for that standard?
And what does it mean to getan ISO 42001 certification?
How does it protect me froma regulatory oversight?
So 42001 is called an AI management system.
And so, if you're familiar with information security, there's ISO 27001
(24:43):
and there's also SOC 2, which, if you run a software company, you have to go
get to prove that you're dealing with information security well. And if
you're buying software for an enterprise, right, you want to have vendors that
can prove that they're responsible and proactive in their information security.
42001 is the equivalent for AI systems.
(25:06):
It's an ISO standard, which means it's international, which means
it's not specific to the AI Act, and it basically describes how to
set up the management system for AI.
So this includes the quality management, the risk management, the responsibilities,
the taxonomies, all the things that you need, generically, in order
(25:28):
to set up this management system.
Now, the good thing about 42001 is this is a finished standard.
It was published late last year, and you can get certified for it today.
This is actually one of the things that we at Modulos help you do:
help you build a management system that can be certified so that you can
(25:48):
show to your vendors, your customers, that you have a trustworthy system.
Now, is this enough for conformity with the AI Act?
That's a political question that doesn't have an answer today.
Got it.
And so now, like, when you're actually setting up these standards,
right, so, uh, one of the things that, uh, people ask about is,
(26:10):
what are the things that I should monitor from an AI application perspective?
What are the metrics that I need to monitor? You know, are they sort of
specified in these, uh, certifications that we talked
about, or are they specified verbatim in this AI Act, or is it something that I,
as a data scientist and model developer, need to come up with?
(26:32):
You still have to come up with that, based on your use case and what your
company's mission is. These ISO standards, they tell you about processes,
they say, these are the steps you should follow, but they don't tell you,
and then your equalized odds fairness needs to be, uh, consistent with this.
Um, they don't do that, uh, in particular also because
(26:54):
they are international and generic.
Um, that guidance that we all want, like, what metric do I need to hit, right?
This is very much an engineering question to ask.
We all want it, but I think we're years out from having, um, clarity on that.
Got it.
And what are some of the most common metrics that you are seeing amongst, uh,
(27:14):
the customers and organizations you're working with as they try to
prepare for, you know, the EU AI Act?
What are some of the things that they're trying to monitor and evaluate?
So, um, when I talk to customers about fairness metrics, one of the most
interesting things that happens is that it puts people on the spot to actually define
(27:35):
their values and write them down in code, and it's always fascinating to watch.
Because everybody thinks, like, well, let's just make the model fair.
And then you have this discussion of, well, do we prefer equality of opportunity
or do we prefer equality of outcome?
And you get to watch these fascinating discussions as companies start to
figure out what our values are.
Now in Europe, there's basically nothing to hang your hat on to guide you.
(28:00):
What's interesting is that the one country that really has a quantitative
standard you can use today is the United States, with disparate impact, which
has a long tradition in jurisprudence.
And so at least that is something that you can use in a quantitative way.
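Since disparate impact is the one quantitative standard pointed to here, below is a small sketch of the usual four-fifths-rule calculation; the groups and numbers are made up for illustration, and the 0.8 cutoff is the customary rule of thumb rather than anything in the AI Act.

```python
def disparate_impact_ratio(selected_protected: int, total_protected: int,
                           selected_reference: int, total_reference: int) -> float:
    """Ratio of selection rates: protected group vs. reference group."""
    rate_protected = selected_protected / total_protected
    rate_reference = selected_reference / total_reference
    return rate_protected / rate_reference

# Illustrative numbers: 30 of 100 protected-group applicants approved,
# 50 of 100 reference-group applicants approved.
ratio = disparate_impact_ratio(30, 100, 50, 100)
print(f"ratio = {ratio:.2f}")                         # 0.60
print("passes four-fifths rule:", ratio >= 0.8)       # False -> investigate further
```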
Got it, got it.
And what about other things?
Like, for example, does the EU AI Act talk about accuracy of models, like
(28:23):
hallucinations, which is a very big topic these days with generative AI?
And, uh, have you seen people, you know, um, track metrics around those things?
So I see people tracking a lot of these metrics that go into a
confusion matrix, if it's a classifier.
I see a lot of people working with guardrails, with LLM guardrails.
(28:45):
This is a very hot topic right now, um, but then when we step back and
we think about the process, like, what are you trying to achieve?
So you're trying to build a GenAI application, you're trying to build a
chatbot, so you think about, what are my priorities, what are my risks, right?
That my chatbot is toxic, that my chatbot leaks
PII, that my chatbot accidentally sells the company for a dollar, right?
(29:08):
And so by listing those risks and saying, like, okay, if the
chatbot sold the company for a dollar, that would be really bad.
Let's prevent that.
That then leads you to setting up the guardrail, and then you go back and say,
okay, now we've mitigated, not eliminated, but we've mitigated, the risk of the
chatbot doing something we don't want.
And that's the process that ISO 42001,
(29:31):
but also the AI Act and other standards, want you to follow.
That's what they want you to do.
Think about what could go wrong here, and what technical means
do we have to mitigate it?
And how well does it work?
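As a toy illustration of that risk-to-guardrail mapping, the sketch below wires a few of the listed risks to simple pre-release checks; the patterns are deliberately naive placeholders, not a production guardrail or any specific vendor's API.

```python
import re

# Naive illustrative checks for the risks listed above; a real deployment
# would use proper PII detection, toxicity models, and policy engines.
def leaks_pii(text: str) -> bool:
    return bool(re.search(r"\b\d{3}-\d{2}-\d{4}\b", text))  # e.g. a US SSN-like pattern

def is_toxic(text: str) -> bool:
    return any(word in text.lower() for word in ("idiot", "stupid"))

def makes_unauthorized_commitment(text: str) -> bool:
    return "we will sell" in text.lower() or "i agree to sell" in text.lower()

GUARDRAILS = {
    "pii_leak": leaks_pii,
    "toxicity": is_toxic,
    "unauthorized_commitment": makes_unauthorized_commitment,
}

def triggered_risks(candidate_reply: str) -> list[str]:
    """Return the risks a draft chatbot reply would trigger; block if non-empty."""
    return [name for name, check in GUARDRAILS.items() if check(candidate_reply)]

print(triggered_risks("Sure, I agree to sell the company for one dollar."))
# -> ['unauthorized_commitment']
```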
Got it, got it.
So I guess, like, I think, um, you know, given the fact that there is
no specific set of metrics that authorities want you to monitor, um,
(29:52):
it comes down to the process, right?
So, like, is there an expected process that, uh, is
outlined in this regulation?
This is one of the audience questions as well, you know: is there
an expected process that one needs to follow to be, you know, regulatory-proof?
Very much so, and at a high level it looks like this.
Think about the risks that your system poses to something, let's say equality.
(30:18):
Um, you list those, and you assess them by how serious they are.
Like, how likely is it?
How high is the impact?
What's the impact on the business if it happens?
And then once you've assessed, okay, we have a risk from our
system here of discrimination, okay, we've assessed that we
(30:41):
need to do something about it.
Let's think about strategies for addressing it.
Can we have a simple solution here?
Do we need to use a totally different model?
What do we do here?
You assess the solutions and then you implement them.
And you see how you do, and if you've mitigated the risks
efficiently, you say, I've succeeded, and then you monitor it, right?
(31:02):
Because we all know models start misbehaving after some time, and so
maybe after a while it comes back.
And then you start the process again; you say, what do I do here
on a technical level to get the model to be within parameters again?
That's the process, at a high level.
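Put into code, that high-level loop might be captured with a simple risk-register structure like the one below; the scoring scale and example entries are assumptions for illustration, not a prescribed methodology.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    name: str
    likelihood: int          # 1 (rare) .. 5 (almost certain), illustrative scale
    impact: int              # 1 (negligible) .. 5 (severe)
    mitigations: list[str] = field(default_factory=list)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    Risk("Discriminatory credit decisions", likelihood=3, impact=5,
         mitigations=["fairness metric monitoring", "human review of declines"]),
    Risk("Model drift degrades accuracy", likelihood=4, impact=3,
         mitigations=["monthly re-evaluation"]),
]

# Assess: work on the highest-scoring risks first, then monitor and repeat.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.name}: score={risk.score}, mitigations={risk.mitigations}")
```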
Yeah, it's very similar to, like, you know, we work with banks in the
United States that have this SR 11-7 guideline that you probably also
(31:24):
know, on model risk management.
And they have the same sort of, like, you know, do you have a process to
test and verify your models before you deploy, and are you monitoring them?
Do you have periodic reports?
And then, you know, um, the sophistication of
the process more or less gives, uh, the company regulatory blessing,
(31:44):
you know, we have heard oftentimes from fintechs and banks
that the more sophisticated the tooling and processes that you can show to a
regulator, the more satisfied they would be, in some ways, you know, giving you
the blessing that you're okay
on these regulations.
Um, cool.
So I guess, um, when it comes to, sort of, uh, practically getting
(32:09):
one of these certifications, right?
Um, you know, uh, you mentioned, you know, Modulos does that.
Maybe, you know, could you talk about how you go about doing it?
Um, uh, you know, what are some of the assessments that you do in the
process to help a company get certified?
So we give you an out-of-the-box, um, AI management system platform.
(32:31):
So basically, it has all the control frameworks, it has the risk
management system, it has the way to integrate into, uh, whatever
monitoring stack you're using.
And we give you sort of the guidance on starting with the
documents you maybe already have and starting to set up the system.
(32:51):
Um, we're a software company, so many of our customers actually
also then engage, uh, a consulting partner, um, that has experience
with this to help them set this up.
And then there's a process.
It's a little bit similar to other audits where, um,
after you've set up the system, you do a phase one audit, and then
(33:13):
you have to operate the system for a while, and then you can get your phase
two audit done, and then you have your certification done.
And then you need to, of course, maintain it.
I mean, nobody's done this for 42001 yet, but then maybe every
year you have to be, uh, reviewed by the auditor to
maintain, uh, the certification.
And so we give the infrastructure for that.
(33:34):
We give the automation for that.
Um, and then we work in concert with consulting partners and auditors,
uh, to help, uh, customers get there.
Awesome.
That's great.
So, so then, uh, switching gears, right?
So, uh, it seems like with a lot of the responsible AI governance, um, people
(33:57):
think that, uh, it's a large company thing, you know, if you're
like a very large enterprise then you have to worry about it, but how do you think
this, uh, EU AI Act will impact startups?
You know, there are so many new startups that are coming up
with generative AI applications.
And what are the opportunities and challenges, uh, you know, that maybe
both startups and these larger enterprises face, you know, in terms of,
(34:20):
uh, getting ready for this regulation?
I think startups and large enterprises have slightly different pain points or
things that they're going to find tough.
Um, I talk to a lot of startups and they're sort of scared.
They think, uh, this is going to cost so much money, so much effort, so much time.
How can we do this?
Maybe, you know, for the American ones, maybe we should just
(34:42):
avoid the EU market for now.
And, um, I would, uh, actually, uh, suggest that they look
at it the other way around.
Um, having responsible AI and becoming certified is no different
from doing it for information security or for data protection.
These are quality signals to your buyers, to your customers.
(35:04):
These are promises to your customers that you are reliable
and that you are trustworthy.
And so, instead of seeing it as a blocker, I think the sooner you do
it, the more you have an edge over your competition who won't do it.
And so, on the other side, you have the enterprise, uh, organizations.
(35:25):
The first thing they do, when they're going to be buying AI from
now on, is say, where is your 42001?
You know, until now, they've asked, where is your SOC 2?
And I think
So, just to clarify that: if I'm a generative AI startup,
I'm actually selling my SaaS software to a large enterprise.
You're telling me that that large enterprise might ask me, uh, to
(35:46):
showcase, like, hey, do I have a 42001 certification, just
like how we get asked for SOC 2 compliance and things like that?
This is already happening.
There are several large tech companies I'm aware of who have notified their vendors
to produce the 42001 ASAP or be dropped.
(36:06):
Yeah, and this is the other side, right?
As an enterprise in this new era of regulated AI, you want to be sure that you
know what it is you're buying, and you want to be satisfied that what you're buying,
the people behind it, the company behind it, is trustworthy and doesn't just do it
once, but can continue to maintain, uh, AI that isn't going to get you in trouble.
(36:32):
Yeah, makes sense.
I think what it means is it's not just, like, uh, a checkbox
that you want to complete, it's actually an enabler for your business, right?
Whether it's a small company or a large enterprise, this is going to
be an enabler for you in the future.
I think so.
Yeah.
So maybe, like, you know, a related question would be, how does this EU AI Act
differ in its impact across industries?
(36:52):
You know, we have, like, you know, different industries, everyone is
now impacted by AI, retail, finance, healthcare, you know, are there
specific provisions tailored for each of the industries, or how does
it impact these different verticals?
And the AI Act is funny, so it has this famous Annex III where it says
what's particularly high risk, and it's this strange mixture of things
(37:14):
that are highly specific, and then things that are so generic where we'll
just have to find out what it means.
So for example, anything to do with HR, or with education, or with administration of
justice, or public services, anything the government does, is considered high risk.
But then you have provisions like critical digital infrastructure, and now we can,
(37:37):
uh, have a long discussion about whether, uh, something is critical digital
infrastructure or not. Um, the way I read it, as long as, if the infrastructure
fails, there could be some real-world harm, it could be considered critical.
But we don't really know, ultimately, where that line will be.
So, um, yes, there are some verticals that are directly name-checked, but these
sort of broader categories mean that it could, uh, cover almost anyone.
By the way, this is different from the approach in the United
States and also the approach by ISO, um, where this sort of risk tiering
by vertical doesn't apply at all.
So in the ISO standard, you're supposed to sort of figure out yourself
(38:22):
how risky you are, and the same thing with the NIST AI Risk Management Framework.
You go through it and you see what your risks are.
There's no guidance like,
oh, if you're in healthcare, then you need to do this.
Yeah.
So basically then, um, there's a related question as well from
the audience on determining the risk classification, right?
You know, there are so many conditions, you know, you mentioned, you know, the
(38:45):
regulations themselves have differences in terms of how they approach it.
Uh, so then, like, for example, you know, even within
a vertical, right, if I'm a healthcare
company, I may have models from clinical diagnosis to customer support, right?
So with varying different risks, how do I go
about this risk classification?
(39:06):
And how do I get ready for this, you know, AI regulation in general?
So, okay, this is perfect as a joint question.
So I think, in general, you should have some AI governance in place,
and you should be ready to, um,substantiate whatever AI system you
(39:27):
use that you understand its risks andthat it meets certain quality levels.
The legal question of whether in the EUyou're considered high risk is of course
very consequential, because if you arehigh risk you have to go through this
thing called conformity assessment, whichis basically like an audit process and
you have to register with the EU andyou have to do a lot of bureaucracy.
(39:49):
And of course, if you get that wrong, that could, uh, could be very bad.
Um, the way to think about that is you should treat all your applications with
some level of care, so that if it turns out, oh, you should be in the high-risk
category, that it's maybe a lot less work for you to turn that around instead of
(40:09):
having your product basically be taken off the market for six months or a year.
Yeah, makes sense.
Cool.
So, so I guess, uh, you know, how do you see the impact of
the EU AI Act influencing global AI standards beyond Europe?
I mean, it seems like everyone is, you know, copying this stuff,
(40:29):
or like, what's your take here?
I used to joke that even the Pope is now calling for AI
regulation, because he really did.
What I find interesting about the process is that these two assumptions,
that we regulate the product, not the technology, and that it's risk-based,
were copied basically by almost everyone.
(40:51):
There are few places where AI regulation really diverges from that.
Actually, one of those places is China, though I know less
about the Chinese AI regulation.
So the structure is going to be similar in most markets, most
countries around the world.
The details will differ.
Um, I find it interesting how, in some countries, um, you notice that
(41:13):
they more or less cut and paste from the AI Act, and I would caution against
that, because of course that language is heavily loaded with, uh, Brussels, uh,
insider language that is very meaningful, and if you cut and paste it into
another country's culture and language it suddenly just makes no sense anymore.
So I'd caution against that.
(41:35):
Um, one trend we're seeing, uh, particularly in the US, thinking
forward and thinking more about GenAI,
is going back to this more technology-specific regulation, particularly saying,
this is how we want you to use LLMs.
These are best practices for LLMs.
That's, I think, the biggest divergence we're seeing right now from
(41:55):
the EU approach versus the US approach.
Yeah, so it seems like most people are taking the common parts of the EU AI Act and
sort of rolling them out in their own AI acts, so
it's becoming kind of like the standard across the world.
So there's a related question, right?
So, like, essentially, you know, do you expect EU regulation to
(42:19):
adjust, uh, for businesses, you know, that want to operate, uh, in Europe,
uh, so that they don't have this painful, potentially painful experience?
How do you feel? Is this now like a baseline that they're going to start from and
develop the regulation from?
I think, uh, and of course predictions about the future are always
(42:40):
dangerous, but I think the AI Act more or less gives us a baseline.
And I think it will happen similarly to what happened with GDPR.
GDPR set the standard for privacy.
Some countries then came up with their own proposals, but they are
variations on the theme, right?
The, uh, CCPA is a variation on GDPR, and I think we'll see something, uh,
(43:02):
similar, uh, compared to the AI Act.
That said, for the keen observer of EU politics:
uh, the commissioner that was responsible for passing the
act, uh, resigned, uh, recently, and in the new von der Leyen
commission, the responsibility, uh, for AI has actually been split
(43:24):
up between different commissioners.
And there's a new word going around Brussels that should
make our ears perk up.
And that's tech sovereignty.
Um, I don't know what's coming there.
But those of us who are interested in AI and are using AI should pay
close attention to what happens there.
Awesome.
So then, like, I think the most important question to probably end this conversation
(43:45):
is, what is the timeline for complying with the AI Act regulation, and how should
companies prioritize their compliance efforts in the short and long term?
So if we're going straight by the EU AI Act, you have, uh, until the end of
January next year to get rid of prohibited systems and train your workforce.
So you have to demonstrate that your workforce is trained, uh, to
(44:08):
use AI as required for their job.
Then, uh, in ten months, we have the deadline for the LLM providers
to tell us what's in them.
And we have twenty-two months for all high-risk systems
to have completed the process of being certified to be on the EU market.
So that is, I think, the hardest deadline.
(44:31):
Um, from the market perspective, I see much shorter deadlines from these tech
companies now saying, we're not going to buy from you unless you're certified.
So they're not waiting two years.
They want to make sure their supply chain is in order much, much sooner than that.
Got it, got it.
So, so essentially the time is now, um, you know, to act.
(44:56):
Cool.
Uh, I guess that's basically our session for today.
Uh, if you have any closing comments, uh, Kevin, um, you know, to sort of wrap
up this conversation, uh, please do.
Uh, but, you know, thank you so much for, you know, joining us and giving
us these illuminating insights.
Uh, it's been a pleasure to be here, uh, and to have, uh, discussed
(45:18):
this, uh, fascinating topic with you.
Awesome.
Thank you so much.
Thank you, everybody, for joining this conversation.
As Kevin says, I think it's time to get ready and start getting your
AI in order and, you know, putting in your governance practices.
Please reach out to us if you have any further questions.
(45:38):
Kevin and I are available over email.
Thank you.