Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Bloomberg Audio Studios, podcasts, radio news.
Speaker 2 (00:07):
Palo Alto Networks manages to fight the narrative of a downward
Speaker 1 (00:11):
draft in the markets. We're up a quarter percent.
Speaker 2 (00:13):
And look, we're seeing the cybersecurity leader announce its intent
Speaker 1 (00:16):
to acquire Protect AI.
Speaker 2 (00:18):
It's expanding the company's capabilities in security, basically combating new threats amid an explosion in artificial intelligence.
Speaker 1 (00:26):
All of this as we get the RSA
Speaker 2 (00:28):
Conference underway. Joining us now, Nikesh Arora, Palo Alto Networks CEO. You've got a lot to announce, a new security platform, but let's just go to the new announcement in terms of M&A. Are you going to give us an amount that you paid? I was hearing about up to six hundred and fifty million dollars.
Speaker 3 (00:42):
We're not going to talk about what we paid, Caroline. Nice to see you. Like, we're again at a technology inflection point. We're all talking about AI. You had people talking about agentic AI earlier. And every time you have a
technology inflection point, it becomes very important that we are
able to come forward and provide solutions to our customers
so they can securely deploy technology. And in that context,
(01:04):
we're very assertively and aggressively working on both a build and buy strategy to build perhaps what will be the most important thing over the next few years, which is a platform that allows you to deploy AI securely.
Speaker 2 (01:16):
Okay, so let's talk about build versus buy. Why was this asset, Protect AI, so necessary to go inorganic on at this moment?
Speaker 3 (01:23):
Well, kind of like, if you look at what's going on in AI, as you were saying before: people talk about LLMs, people talk about deploying AI-based applications, people are trying to figure out how to deploy infrastructure, whether it's on prem or in the public cloud, which chipset to use, which model to use. All these are very important technological
decisions which are going to underpin the platforms of the future. Now,
(01:44):
when you put them together and you deploy them, you
have to make sure that you're looking at the security
aspect of every one of these things. Protect AI, after an extensive look around the market, we found was working on some very interesting topics which are complementary to what we've been building. We're building a great runtime platform where we protect our customers as they deploy AI. Protect
(02:05):
AI was working on something similar, where they're looking at all the models in the world, scanning them to make sure there wasn't bad stuff lurking in them. So the combination of the two, which we will integrate into one platform, actually allows us to be more comprehensive in what we will offer our customers.
Speaker 2 (02:21):
I just want to get your bird's-eye perspective here, Nikesh, because no one can push us forward as much as you can, in many ways, to the future of agentic AI.
Speaker 1 (02:28):
And the fact is, where are we ultimately going to be protecting ourselves?
Speaker 2 (02:31):
There's demand for traditional, basically firewall and endpoint security products. How does that shift if you're going from a user level to an agent level in this moment?
Speaker 3 (02:42):
Well, I think that's going to be the buzzword for RSA, Caroline, and people are going to talk a lot about how do you make agents work. I think it's still unclear. There's a lot of innovation being put out in the market, whether it's the A2A model or the MCP model. They're all wonderful buzzwords we create in our industry. But is it agentic? Yeah, look, I've said before, to me, agentic AI
(03:03):
becomes real when you start giving AI arms and legs,
whether robotic arms and robotic legs or real arms and
legs in terms of replacing human beings. And I think
that's where the question becomes, can I rely on AI
to accomplish the task without supervision? And that's where things
will get very interesting and very hairy in certain cases.
That's where it becomes important to make sure that the
(03:23):
agent you're giving autonomy to is something that you're very
comfortable will act within the guardrails that you put out
there for that agent. And then you got to make
sure nobody can take over your agent and hijack it in a way that can make it do things you don't want it to do. So I think that's going
to be the next frontier of cybersecurity as we get
AI deployed in multiple places, how do we give autonomous
(03:44):
control to these agents? How do we give them agency effectively?
And I can't wait for that world to happen, but it's
going to be a whole new set of opportunities that
will open up for us.
Speaker 2 (03:55):
There are a lot of names trying to make the
most of this opportunity. A lot of them are the hyperscalers.
A lot of them in many ways become your competitors
as they add their own security offerings. Where do you
sit in this whole frenemy environment?
Speaker 3 (04:08):
Well, you know, it's fascinating, Caroline. Logically, the cloud providers should have been our competitors in cloud security. They should have been our competitors in endpoint security, but they're not. They're focused on making sure technology gets deployed as quickly as possible, which is good, perhaps, for their business and for all of their customers. Our job is to make sure
that we stay in lockstep with them and work with
(04:29):
our customers to make sure they can deploy technology in
a secure fashion. Let's be fair: if our customers have comfort that when they deploy AI, when they give autonomous control to AI to do some repetitive tasks or interesting tasks, it can be done securely. The moment we can provide the underpinning of trust, the underpinning of reliability, the fact that if you deploy with Palo Alto Networks, there is
(04:52):
a very high probability that it's going to be much
safer than anything else. I think that's where the winning
combination happens. So I don't see this as sort of
a competitor environment for now with the hyperscalers. I see
it as an opportunity for us to work together and
make sure we accelerate the adoption of the technology, as
opposed to have our customers be confused.
Speaker 2 (05:09):
Which way to go. I mean, you used to work at Google. You host your products on Google Cloud. How do you feel about their big splashy deal for Wiz?
Speaker 3 (05:20):
It'll be interesting to watch. I have lots of conversations
with people at Google about it, and we don't intend
to change how we deploy our products. We buy infrastructure
from them, and sometimes we see them in the market.
Our hope is that our customers will understand that you
want an unbiased product that can operate effectively on every
cloud provider as opposed to something that is beholden to
a particular cloud provider. We still need to solve for
(05:43):
everybody else out there, so I think that's our sort
of ethos. We want to be independent. We want to
be somebody that can deliver the same level and capability
of security across every platform there is out there, so
our customers don't have to spend time trying to integrate
all this stuff together, which is our entire thesis around platformization.
Speaker 2 (06:01):
I love following what you're doing in terms of platform offerings, products,
what you're doing in terms of, well, the latest M&A, but I also love following you on social, and perhaps whether you wanted to or not, you're involved in the AI
debate about whether we're going to get as much CAPEX spend,
whether infrastructure is reality versus hype.
Speaker 1 (06:19):
Where do you stand on it right now, Nikesh?
Speaker 3 (06:22):
I think if you look around you, everybody is getting
ready for a very large adoption of AI scenario because
you see tens of billions of dollars being committed by
people in terms of building infrastructure. I think that's right.
I think we may get the timing sort of not perfect. You know, you may build too much before
it's all consumed. But I think it's headed in the
(06:42):
right direction. I think short term, in this twelve-to-twenty-four-month timeframe, you could see that a lot of the investment that's going in is going towards innovation, let's say, make a smarter model, make robotics work. So you need
a lot of power, you need a lot of compute
to get to a place where these models become extremely useful.
Once you get to a point where these models are
extremely useful, the question becomes how do I deploy them
(07:05):
in my business? How do I, as a regular company,
regular customer, deploy these things effectively, securely? I think that
could take a little longer than people think because everybody's
experimenting and we're not all experts yet. Now this is
something that came about less than twenty four months ago,
so we all have to build the muscle to understand how
this affects our business. How do I build robotic things
(07:25):
that can do stuff for me around my enterprise? How
do I build AI agents that can do stuff for
me on my enterprise? Do I build or do I
buy from somebody? So all that stuff will take a
little longer to pan out. But when it happens, we're
going to need all that capacity that's being built, and unfortunately,
you can't wait to build capacity. You have to build
it ahead of demand. So I think it's
the right direction. Timing's still to be figured out.
Speaker 2 (07:48):
Well, thanks for sharing your expertise across all the subjects
and the deal news, Nikesh Arora. Go enjoy RSA.
Speaker 1 (07:53):
We thank you, Palo Alto Networks CEO.