Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Unknown (00:00):
We have to remember
that the learning models and
(00:02):
everything that we've done with Gen AI and the advancements of Gen AI and how it's helping with developer velocity and modernization approaches. It's in its infancy, its early stages.
Antone Gonsalves (00:16):
Hi, and welcome to the Tech News This Week podcast. I'm your host, Antone Gonsalves, editor at large with TechTarget Editorial. There were a couple of important developments in generative AI this week: Google introduced its new AI system, Gemini, and IBM and Meta joined over 50 tech
(00:38):
companies and educational institutions to launch the AI Alliance, a community focused on responsible AI. Here to talk about these events and more are Paul Nashawaty and Mike Leone of TechTarget's Enterprise Strategy Group. Welcome, gentlemen.
Unknown (00:57):
Thanks, Antone. All
right. Thanks for having me.
Yep.
Antone Gonsalves (01:00):
So let's start with Google. Google claims Gemini is more powerful than OpenAI's GPT-4. Of course, we have no way to verify that, so we'll assume that Gemini and GPT-4 are at least competitive. But an issue not addressed, beyond the power of the systems, is the problem with the
(01:22):
inaccuracies in the responses from large language models, called hallucinations in the industry. You know, Amazon customers at re:Invent last week told me that hallucinations were a critical problem in the use of these large language models. So let's start
(01:44):
there. What are Google, Microsoft and Amazon's cloud service, AWS, doing to help enterprise customers with that problem? And are they successful? What are you hearing out there?
Unknown (02:00):
Yeah, I'll start, Paul; you can jump in. And I'll say this: we're doing a better job as a collective market and industry at gauging how well models do compared to one another, with the rise of what are called industry
(02:21):
standard benchmarks. So that's happening. And as they released Gemini, there were several benchmarks where they were trying to rank themselves against some of the other models, and it does very, very well. Right. And that's fantastic. But a big point that you mentioned is around the sophistication, right? Google is coming out saying this is the most sophisticated; they're also saying this is the most
(02:44):
responsible. Now, they didn't shed many details on the responsible layer. But that's a big theme right now: avoiding hallucinations. I'm big on accuracy; accuracy is king, period. You can do anything you want with this technology, you can deploy it, monster large language models, you can put tons of infrastructure behind it, but if it gives me the wrong answer, I'm not
(03:06):
using it, I'm sorry. You know, so that's a big reason for an emphasis on responsible AI as a whole, because accuracy is a pillar there. And these organizations, these big cloud providers, they're doing a ton to try and improve the reliability, the trustworthiness, the governance,
the security. Now, with all that said, Meta also recently announced that they disbanded their responsible AI team, right? So you hear this big emphasis on, we need to be good and accurate and governed and secure and responsible, whatever, and then Meta goes and does that. So it's early days. I know they're building out teams there. You know, I
(03:51):
know Google specifically has a really well-structured process and approach for monitoring and making sure organizations are doing things the right way. And the other thing you'll hear from all of them, and they stress this a ton: we will not use customer data to influence or train our
(04:12):
models. Right. So that's a theme we hear everywhere. And then, by the way, at AWS re:Invent last week, there were so many announcements, all associated with trust. Right. So, you know, Paul and I did a video recap there. But the theme, and it's going to continue to be a theme, I say it all the time: 2024 is the year of responsible AI. Right?
(04:35):
Yeah, absolutely, Mike, and, you know, thanks for that. Quite an in-depth analysis there. One of the things I want to touch on, you know, as we look at the use cases: you mentioned 2024, right, Mike. When I think about what we saw with Google, what we see with the announcements this week, it was a ton of announcements on AI and on
(04:56):
the alliances and things that are happening. But when we look at what's happening, and I want to echo a couple of points that Mike brought up, you know, I'm looking at it in the context of developers and how they're building new code and productivity and such, and how that all works. It's important to understand that. But when I look at it, Mike,
(05:16):
you triggered something on the responsible AI piece. And I want to touch a little bit on regulations, because what we're seeing is an emergence, and I bet in 2024 we're going to see a whole lot more of this: regulations going into place that focus on Gen AI. An OpenAI investigation in Italy is going on right now. We have lawyers in
(05:39):
Poland that are dealing with new lawsuits around GDPR violations. And we're seeing the EU Data Protection Board launching a task force for coordinated enforcement around ChatGPT. All this is happening now, right? And that's around that responsible use of AI. So when you look at that, and start modernizing your environments, and start building your
(06:01):
environments, I really like these announcements this week, because they introduce a new level of competition. We have to remember that the learning models and everything that we've done with Gen AI and the advancements of Gen AI and how it's helping with developer velocity and modernization approaches, it's in its infancy, it's early stages.
Antone Gonsalves (06:21):
But aren't these organizations, the AI Alliance that was launched this week by IBM and Meta, and, a while ago, the Partnership on AI, to a large extent set up for tech
(06:42):
vendors to stay ahead of government, always trying to mold the discussion around the safe use of AI in a way that will protect businesses and people?
Unknown (06:58):
Yeah, I'll take a shot at that one. Because I think, when I look at the Alliance here, I really want to echo the point on the competitive landscape, right? When you look at what has been announced this week, the innovation that's driving towards results is addressing business need, okay? The regulation and governance that
(07:19):
goes into place behind it is reactive to what technology is being created to address that business need. So we see businesses that are looking to grow and modernize based on, again, their business directives and business needs. But then there's also, well, how do you move that modernization forward by being responsible? And that's when you start
(07:40):
putting those governance and those rules in place. So there's going to be a little bit of evolution in this whole process, because we have to see what happens before we actually start putting governance and regulations in place. Because I think we're going to find that, you know, in the movement of creating new code and modernizing, we're seeing in our research that the amount of work it takes to do the efforts that are
(08:03):
being delivered today is two to three times more than what we saw just a few years ago, just two years ago. We're finding that in our research. What we're also seeing is that the amount of output you're going to see from a modernization perspective is not about throwing more bodies at it; it comes down to automation, and it comes down to utilizing AI to
(08:24):
make sure you have the right automation in place, and the right tool sets in place, in order to meet the needs of business acceleration and velocity. Sure.
Antone Gonsalves (08:33):
I mean, yes, that's why everyone's so interested in AI on the enterprise side. But it's worth noting that the tech industry's record on safety isn't so great. I was looking at social networks and social media. So now that they're going into generative
(08:53):
AI, which is transformative in a way that can have an impact on many facets of society and where we live, I don't think that, given the tech industry's record, we should necessarily leave it up to them. So I think there's going to be some tension here, if not between tech companies, customers, enterprise
customers, consumers and government. I think this is going to be a battle that's going to go on for a while. All right, so I want to move on to another topic, and that's open source: the open source community in the development of generative AI. How do you see open source, the open source community, within the development of generative AI?
(09:38):
What role do they play in this?
Unknown (09:42):
Yeah, I'll go first here, Paul, and I know that you have thoughts on this; we've talked about this several times. Right. But for me, so much about Gen AI right now just goes back to what we were talking about earlier: responsible AI and transparency. At least open source can provide a level of transparency for organizations as they go and pursue generative
(10:05):
AI. There are some models that are private and closed, right. And that's an issue, especially if those organizations don't even know how they work. Or, hey, I don't know, I can't cite where that data came from. At least open source gives a level of transparency, gives organizations the ability to build off of it in a way that, I
(10:31):
don't know, they can trust, right. And that's kind of the big thing for me. It's that transparency, and then it's a level of control on top of it. So, hey, great, I can see how this was built. I can see how it's changing over time. Hey, now I want to add my data. And I know I can integrate in different, unique ways, I can build my own integrations, I can do all these different things. So from that standpoint, it's
(10:52):
really, for me, a transparency component. And we're just scratching the surface on the transparency side, too, right? I think we need to go a lot deeper there. But yeah, Paul, what are your thoughts?
Yeah, a lot of thoughts there. I mean, you know, you can go in a lot of directions. But I think, when I look at open source, I want to bring it back to data, right. And when I think about it in the context of
(11:14):
where development teams are going, they're moving from experimentation to really embedding, you know, Gen AI into their software development life cycles. What we're seeing is developers may gain upwards of 50% productivity on average; in some use cases, we're seeing over a 200% gain, on average, in development. So really, really important, right, and that's directly related to open source
(11:37):
technologies. I agree with you: you need to have open source, so the community can touch and use the product and expose any weaknesses that may happen in the solutions. The ecosystem matters, exactly, to your point. But the other thing that's incredibly important, and I think this is a prediction for 2024, is the rise in the
(11:57):
importance of self-service delivery and developer portals. We're going to see companies like, you know, for example, Backstage, the developer portal that was created by Spotify and donated to the CNCF; that is absolutely based on open source, right. And that's an open source technology that is being used to help productivity and drive those results. You're going to see more and more Gen AI and AI
(12:18):
solutions embedded into these open source offerings, so you can actually produce and create that developer velocity that organizations are looking for.
Okay,
I want to add one thing to this, because it's really important. We actually have research data asking organizations, hey, as you use and develop generative AI, you know, what approach are you taking? And 53% of organizations, so a majority
(12:42):
of organizations that are pursuing Gen AI right now, are going to be utilizing open source and open source LLMs under the hood, whether it's them using it themselves or working with a third party to help them ramp up their Gen AI initiative with open source. But 53% is the number that we have right now.
Antone Gonsalves (13:03):
That's significant. And you mentioned transparency on the models. That was another point: AWS, Amazon customers all mentioned biases. They were all very concerned with the biases in the large language models they use,
(13:24):
and they fear that if they use it in an HR application, they could be discriminating, you know, against new hires. So possibly this transparency, if an open source model is more transparent, where the enterprise can actually see how it's coming up with the answers, could
(13:46):
be a benefit, I mean, an absolute benefit, compared to more proprietary systems offered by vendors. You know, so we'll see how that develops. That's an interesting topic, and we'll revisit it again, I'm sure. Paul, Mike, thanks a lot. I really appreciate you joining me on the podcast. Always a pleasure.
(14:07):
Antone, thank you. Yeah, thanks so much. Security is critical with everything in IT. So here to discuss the latest security news is Melinda Marks, an analyst with TechTarget's Enterprise Strategy Group. Melinda, thanks for joining me.
Melinda Marks (14:27):
Thanks for having
me.
Antone Gonsalves (14:28):
Sure. You know, I mean, I talked to a lot of AWS customers, and I was at the show, and I did find it interesting that they mentioned privacy, and particularly securing their own data, as major issues for them. And this was despite the fact that they use AWS. So it seems
(14:51):
like, to me, there was a bit of a disconnect here, where the customers are expressing some serious concern about their privacy; but at the same time, you would think that they would feel somewhat comfortable, because AWS is supposed to help them do all of that. Why do you think there's a disconnect? There seems to be a disconnect there.
Yeah,
Melinda Marks (15:15):
I think it's just because of the nature of AI and ML. So machine learning, generative AI: it takes a lot of data to train these models. So for it to be more effective and more accurate, it needs more data. So there's that concern, as you make better models and improve the technology, you
(15:38):
know, you want to make sure that it's not going to pull sensitive data, and that certain things you're sharing with these models won't just end up somewhere. So I think it's that balance of creating good ML, good generative AI tools, and making sure that, as we develop these technologies, as we progress and
(15:59):
evolve, you know, you're contributing, but your data isn't going to be shared. And it's a tricky thing to balance.
Antone Gonsalves (16:08):
Yeah. And I think, you know, the technology is so new. And obviously at re:Invent, there was a whole slew of security announcements. So evidently the cloud providers, or in this case AWS, don't really have all their tools ready yet. I mean, they're still rolling them
(16:33):
out. So I was assuming that, with customers experimenting with generative AI, that would make them feel nervous, too: they don't have the tools yet. They're just starting. Is that correct?
Melinda Marks (16:45):
Yeah, definitely. There's a lot of experimentation going on. And then also, all those CSP players are key in this generative AI race, to be a leader and, you know, have the most innovation and the biggest competitive advantage. So it'll be interesting to continue to watch as they, you know, show
(17:06):
what their differentiators are. I think at re:Invent, they were trying to talk about freedom of choice of models and flexibility, meeting the customers where they are. I think that's a theme that we see from all the CSPs. But I think people realize that, whichever CSP they're using, generative AI is going to be key to usage and to continuing to get
(17:29):
the benefits of digital transformation for business
growth.
Antone Gonsalves (17:33):
Yeah, for our listeners, CSP means cloud service provider. Okay, so, all right, is it too early to see any differentiation between the major cloud providers, AWS, Google, Microsoft, in terms of their approach to security?
Melinda Marks (17:54):
Yeah, I mean, this year has been a big year of generative AI, and with Microsoft's activity with ChatGPT, you know, they're a clear front-runner. Google has been in the AI and ML, and now Gen AI, game for a long time. So they have models, you know, they have assistive features that
(18:17):
they touted earlier in the year. And I think they've had a lead on this for a while as well. Re:Invent, I think, was playing catch-up a little bit. But as my colleague who covers Gen AI has said, it seems like they're catching up, and they did a good job of, you know, trying to catch up. So it was really good to hear what AWS offers, just because there are so many people
(18:39):
who are using AWS, and they need a lot of those features as well. So
Antone Gonsalves (18:44):
how should tech buyers, our listeners, think about choosing between these cloud providers and evaluating them in terms of security, since all of this is so new, and all of
(19:04):
them seem to be trying to jump on that marketing train and make all these announcements? Yeah,
Melinda Marks (19:13):
Yeah. Well, hopefully people will realize it's not just marketing or buzzwords; this is a real competitive differentiator for cloud service providers. Because you want to put your workloads into infrastructure and platforms that are secure and well architected. And, you know, you have a shared responsibility
(19:36):
with a cloud service provider. So they're taking care of the security of the platform that you're putting your workloads in, but you're also looking for what they're doing to help you with your part of the shared responsibility model: making it easy for you to secure your workloads. And every CSP is architected differently. You'll see at the different shows, they
(19:59):
have different chips, they have different architectures, and their platforms are just built differently. And so the way that security is incorporated is different for each cloud service. And so it's really important for organizations to look at that, look at what the CSPs provide
(20:19):
both at the platform level, and then also what they do to help you with your workloads. And sometimes it's extra features and services that can be enabled. So each of them has things like cloud security posture management services, so you can check the configurations of your workloads at that higher level of the shared model, of what a
(20:41):
customer is responsible for. And they also take into account, like, they want their customers to be successful with security; they don't want them facing more incidents as they move their workloads to the cloud. So they offer all these extra services and features, they have all these extra capabilities, and they continuously
(21:03):
upgrade them. And then customers can use that. They also have different types of monitoring places where you can pull from, and they're integrated with security vendors. So if you're a big user of a certain security vendor or security tool, ideally they're pulling the data and are integrated well with that CSP. So my advice is always: just make
(21:24):
sure that security is involved in any technology decisions. It doesn't happen enough. A lot of times I talk to CISOs who feel like they're just getting into reactive mode to what IT ops or the CIO has picked for their CSP, or what developers or DevOps picked. And that's kind of driving things as a push for
(21:44):
productivity, and security is, you know, kind of something where you think, oh, now I'm stuck securing what these other people chose. Now it's more important, I think, for even IT and ops to look at the security features and capabilities, but ideally, also, the security team should be incorporated into that
(22:05):
decision-making process, so that they can also evaluate it and determine what they need to do for their jobs, which is managing risk and making sure that if there are attacks or threats, they can detect and respond as quickly and efficiently as possible. And with whatever tools they're using, they have a clear strategy for making sure that things are efficient.
Antone Gonsalves (22:27):
Okay. All right. Sounds like a lot of work needs to be done before enterprises even think about opening up the wallet to any of these vendors. Thank you very much, Melinda. I appreciate you joining me on the podcast. And that wraps up this week's show. Thanks for listening.