Episode Transcript
(00:08):
Hello, I'm Karen Quatromoni, Director of Public Relations for Object Management Group, OMG. Welcome to our OMG Podcast series. At OMG, we're known for driving industry standards and building tech communities. Today we're here with Bill Hoffman, who is the Chairman and CEO of OMG, and Claude Baudoin.
(00:32):
Bill?
Thanks, Karen. Hey, Claude, nice to see you again. Could you tell us a little bit about yourself, your company, and your affiliation with OMG? You've been with us for an awful long time.
Yes, Bill. That's true. I'm an independent consultant. My areas of specialty are IT strategy and knowledge management.
(00:52):
I started that consulting company 16 years ago now, but I have a total of over 50 years of experience in software and IT management. I'm based in the San Francisco Bay Area, and I've participated in OMG's work for most of its existence. And currently I co-chair three different subgroups within
(01:15):
OMG: the AI Platform Task Force, the Business Modeling and Integration Domain Task Force, and a small group called the Cloud Working Group.
Excellent. Well, we certainly have got a lot of coverage here. We appreciate all the effort you've put in and all the time you've spent helping us build our standards. It's been quite the adventure. I understand
(01:37):
the SDO recently published a discussion paper entitled "Artificial Intelligence and Cloud Computing." Can you talk a little bit about that paper?
Yeah. The purpose of that paper was to explore the synergies between artificial intelligence (AI) and cloud computing. And basically the purpose and the scope are to talk about some use
(01:58):
cases for deploying AI capabilities and AI services in the cloud rather than on premises, to explain what kinds of AI services are typically available in the cloud, and to give advice on how to select an appropriate provider of AI services in the cloud, or cloud services that support AI.
(02:22):
And we also write quite extensively, and quite importantly, about governance issues: how people are supposed to control the lifecycle of the models, the applications, and the data sets used to train AI models. And we also touch on how AI can help cloud users to support their deployment,
(02:45):
for example to strengthen security in the cloud, and also how AI can help discover what AI services are available. So that's quite a broad scope.
These technologies, AI and cloud, were quite separate. Now, what changed?
Well, they did develop separately. Obviously AI was born at the Dartmouth Conference in 1956.
(03:08):
Cloud computing really started coming into existence in the late 1990s. What has changed is the high demand on computing resources to run complex AI, the more modern AI that's based on complex neural networks and large language models (LLMs) like ChatGPT and others.
(03:32):
This demand on resources means that for many organizations it's now easier to rent the required infrastructure resources in the cloud, as opposed to going through the process of procuring, installing, and managing those resources internally. And that's why the cloud has become a key enabler of AI.
(03:56):
That makes good sense. This whole on-prem, off-prem discussion is very key to people's deployment strategy. What do you need to deploy AI in the cloud? Does your paper address these things?
Yeah, and actually that's a good segue from the previous question on the convergence of the two technologies. So we need to go back a little bit into the history of this.
(04:17):
AI has changed considerably since its initial stage. So there was this initial idea in the late 1950s, and then there was a first period of development and offerings of AI in the 1970s based on what was called expert systems. And then AI lost its appeal, and because it lost appeal,
(04:39):
it lost funding throughout the 1980s and 1990s. We call this the AI winter, and that's because AI really didn't deliver on its promises. There were several reasons for that, but one key reason was that AI required a lot of computing resources that were generally not available at that time.
(05:01):
While this was happening, of course, computers became faster, and specialized types of computing units were invented, such as graphics processing units (GPUs). Those were initially intended for graphics work, but they turned out to be ideal for neural network computations.
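As an aside, the fit is easy to see in code: neural network workloads reduce largely to big dense matrix multiplications, exactly the kind of parallel arithmetic GPUs were built for. A minimal sketch, assuming PyTorch is installed and a CUDA GPU may or may not be available:

```python
# Minimal sketch: the core operation behind neural network layers is a large
# dense matrix multiply, which a GPU parallelizes far better than a CPU.
# Falls back to CPU if no CUDA GPU is present.
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

start = time.perf_counter()
c = a @ b
if device == "cuda":
    torch.cuda.synchronize()  # wait for the asynchronous GPU kernel to finish
print(f"4096x4096 matmul on {device}: {time.perf_counter() - start:.4f} s")
```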
(05:21):
So as the infrastructure evolved and suddenly became able to sustain the high intensity of computing that was required by more modern versions of AI, the question for organizations became: what do I have to do if I have to equip my organization with these sorts of
(05:44):
resources that are going to cost a lot of money? How do I know that I'm going to need this for a long period of time? What if I buy something and one year later it's obsolete, and I'm still depreciating my old hardware for years when I'm not using it, et cetera?
So the outcome of that is that for many
(06:05):
organizations, it would be too costly and too risky to install all this in-house. And the cloud in the meantime had emerged and matured. There are massive amounts of resources available in the cloud from big and small vendors, and people have come to realize that there are lots of advantages to doing
(06:28):
AI in the cloud. There is also another factor, which is that you may only need that level of resources during the phase when you're training a model, but not later during the phase when you're executing the model, what's called the inference phase. The cloud has a property called elasticity,
(06:51):
which means that if you reduce the amount of resources you use, your monthly bill also goes down. So that's been very appealing, and all this converges to explain why you may want to deploy AI in the cloud as opposed to on premises. But the paper does address some of the reasons why the opposite might be
(07:13):
true, or why you might want to deploy AI even at the edge, in an industrial network, that is, on the devices closer to the equipment or to the field.
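To make the elasticity point concrete, here is a back-of-the-envelope sketch. All rates and hours are hypothetical, since actual cloud pricing varies widely by provider and instance type:

```python
# Hypothetical numbers, for illustration only: a heavyweight training cluster
# rented briefly, versus a lightweight inference instance kept running.
TRAINING_RATE = 30.00    # $/hour for a multi-GPU training cluster (assumed)
INFERENCE_RATE = 1.50    # $/hour for a single inference instance (assumed)
TRAINING_HOURS = 200     # one-off training run
SERVING_HOURS = 720      # one month of serving the trained model

# Elastic: pay for the big cluster only while training, then scale down.
elastic = TRAINING_RATE * TRAINING_HOURS + INFERENCE_RATE * SERVING_HOURS

# Fixed provisioning: keep peak (training-sized) capacity all month.
fixed = TRAINING_RATE * (TRAINING_HOURS + SERVING_HOURS)

print(f"elastic bill: ${elastic:,.2f}")   # elastic bill: $7,080.00
print(f"fixed bill:   ${fixed:,.2f}")     # fixed bill:   $27,600.00
```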
Very good, thanks. Yeah, the paper mentions four large vendors. What about the others?
Well, those are typically the 500-pound gorillas of AI services in the cloud,
(07:37):
and that's Amazon, Google, IBM, and Microsoft. And that's because they have comprehensive offerings, and the breadth and depth of those offerings allowed us to really discuss how you select services.
But we do take pains in the paper to explain that these are not the only ones. And depending on what AI services you need,
(07:59):
there may be other companies that would be perfectly suitable. So as usual, people shouldn't just say, oh, I'm going to use Google just because it's a good name, or IBM or Amazon or Microsoft. They should say, what does my business need? And then do some due diligence and find out which services you intend to use, who's offering them, at what cost, what services they offer,
(08:24):
and what flexibility they give you to potentially expand later into using services that you were not targeting at the beginning. So there's a whole lot of other services we could have talked about, and for reasons of space and trying to focus, we didn't. But I could just mention: HP Enterprise has GreenLake,
(08:45):
there's Snowflake, there's CoreWeave, there's Lambda Labs, there's H2O. Oracle has an offering called Oracle AI. There's Vertex, DataRobot. I mean, the list goes on and on. So do not take the fact that we talked about four providers as an indication that we're either partial to them or that they're the only ones.
Excellent, excellent. That makes great sense, Claude.
(09:07):
So can I migrate a trained AI model from one vendor's platform to another, or am I locked in? I mean, this has been going on since we started computing, right? Vendor lock-in.
This is still a challenge today. We do mention this in the paper. We talk about migrating either between different clouds, or
(09:28):
taking a model that you developed internally on premises, with perhaps some limited resources, and then moving it to the cloud, either because you want to expose it to internet users and commercialize it to the masses, and then you want a bigger platform to do it on,
(09:49):
which might be a cloud service. So migration is either cloud to cloud or on-premises to cloud. There are even some people who say, okay, once my model is trained, I want to re-insource it and execute it in-house instead of in the cloud, maybe due to concerns about security of the data. So ideally,
(10:09):
you don't want to have to restart from scratch and retrain the model. But different platforms have different requirements, and models are not necessarily portable directly from one platform to another. So you're quite right to evoke the specter of lock-in.
The OMG SDO,
(10:30):
the Standards Development Organization, has another project going on right now. We just issued a request for proposals for Portability and Interoperability of Neural Networks, PINN, and we're awaiting submissions to that request. So that's exactly because we've identified that problem and want to help solve
(10:52):
it. We'll probably have some standard emerging from OMG to help resolve this in 2026.
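For context on what portability looks like in practice today, ONNX is one widely used, existing interchange format for trained models; it is separate from the OMG PINN effort described above, which is still at the request-for-proposals stage. A minimal sketch, assuming PyTorch and a toy stand-in model:

```python
# Sketch only: export a trained model to ONNX, an existing open interchange
# format, so another runtime (cloud or on-premises) can execute it without
# retraining. This is distinct from OMG's PINN effort, which is still an RFP.
import torch
import torch.nn as nn

# Toy model standing in for whatever model you actually trained.
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 1))
model.eval()

example_input = torch.randn(1, 16)  # an example input fixes the graph's shapes
torch.onnx.export(model, example_input, "model.onnx")

# The resulting model.onnx can then be loaded elsewhere, e.g. by ONNX Runtime:
#   import onnxruntime as ort
#   session = ort.InferenceSession("model.onnx")
#   outputs = session.run(None, {session.get_inputs()[0].name: some_input})
```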
Excellent. Excellent. So Claude, you mentioned security. Do you have any guidance on how to protect data from leaking to unintended parties, for example?
So there's a whole field of cloud data governance on
(11:13):
which OMG has already published some papers. We did one a couple of years ago, I think it was in 2023. I would recommend that people really look at it; you can find it in the catalog of Cloud Working Group deliverables at omg.org/cloud.
You have to think about some of the risks that are inherent in
(11:36):
using a model that is deployed in the cloud. For example, is the model that you're training going to retain some of the knowledge gained from the training data that you as a customer are supplying to that model? And is that data going to percolate, so to speak,
(11:59):
into other applications for other customers of that cloud provider? Is the model aggregating everything it receives and actually making other people benefit from it? So you want to make sure that the model doesn't get better on behalf of other customers,
(12:20):
which may be your competitors. And the way to address this is to look at your cloud service agreement. I mean, you should always go over any cloud service agreement with a fine-tooth comb and scrutinize it extremely well. We also have papers about that, by the way. But in the case of AI services,
(12:41):
one category of challenges that should be addressed is the provider guaranteeing that the data you supply to the model is not going to be exfiltrated or reused for the benefit of others that you don't approve of. There is no one-size-fits-all answer to this,
(13:04):
but these are questions that you need to address with the cloud provider, and look for a suitable response in the cloud service agreement.
Very good. Excellent. So I know the paper just came out, but do you plan to issue revisions?
Well, we always plan to issue revisions of all the papers we publish. Sometimes we publish a revision every three years; sometimes we
(13:30):
take longer to publish a revision, when the technology that's being addressed is a little bit more stable. It is clear that this is an area that is evolving rapidly, and so we do expect that there will be revisions. The paper came out at the end of 2024, so by 2027 we should really have a new small
(13:52):
project to update it.
People who are watching us or listening to us should really think about whether they have information to supply to us to feed that revision. And the Cloud Working Group at OMG is an open organization that people can actually join for free. We have a mailing list,
(14:13):
we have a LinkedIn group, and we welcome contributions from anyone who has expertise in this area. So please don't hesitate to contact us in order to plan to contribute to such a revision in the future.
Excellent. Claude, last question. How can organizations get started with cloud and AI, in addition to what you just told us?
(14:34):
Most organizations already use the cloud in various forms. Many organizations are still relatively early in their AI journey. The first steps that people should consider are: What is it that you want to use AI for? And secondly, does it make sense to do it in the cloud, as opposed to on premises, or in a
(14:58):
multi-cloud, or even at the edge? At that point, you can investigate available solutions and choose them. And at that point, you can really benefit from joining OMG, if you haven't already, contributing to our papers and helping us evolve and revise some of the guidance we're
(15:21):
providing. We have a wealth of material; we probably have now something like 38 different discussion papers available at omg.org/cloud. So I would say a good first step is to select from those papers. You don't have to read all 38, but find the few that are relevant to your problem and use them as
(15:43):
guidance as you start your journey. And then instead of doing this in isolation, join our efforts in order to benefit from the wisdom of others and share your own wisdom with others.
That's excellent. Thanks, Claude. Thanks so much for sharing your expertise with us today!