
October 2, 2024 • 55 mins

Summary (AI-Generated): what AI is and how it works; why we are seeing so much emphasis on AI these days; the dangers of AI, such as data exposure and wrong information; what Cisco is doing to secure AI; and recommendations for customers who are using AI. Some key points from the video: AI is a fancy, expensive autocomplete. We are seeing so much emphasis on AI now because we have more resources to really see it explode and to see its benefits. The dangers of AI include data exposure, wrong information, and hacks. Cisco is working on securing AI by monitoring it, testing it constantly, and keeping it secure, and it recommends that customers using AI do the same.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Welcome to the show everybody. Today is Wednesday, July 24th. We're broadcasting here live from Research Triangle Park in North Carolina.

(00:09):
Got Sudhir here as one of the guests. He's actually sitting right beside me. You can't see him on my screen, but you can on his.
Andres, good to see you as always. We have a fantastic show today on artificial intelligence.
Not just on AI, but specifically the security aspect of artificial intelligence.
Everybody claims to have AI now, and we see it in every advertisement related to technology. It's on our phones. It's in our cars.

(00:36):
It's very cool and very exciting, and it's going to remain that way.
It's going to remain that way until it gets majorly hacked and we're all scrambling. So, as we security-minded people know, security is usually an afterthought
when new exciting technologies get rushed to the market, and AI is absolutely not an exception for that.

(00:58):
Exactly. Usually, and unfortunately, it's always the last thing that we think about when we're creating new technology.
Look at the examples that we've seen from cloud technologies early on:
how to secure those, all the hacks that happened while it was maturing, and I still say that it's still maturing.

(01:22):
So, it is an interesting thing to think about security for a new technology.
Now, companies are already using AI. Most of them have already thought about how to use AI, but a lot of questions come up.
Are they exposing any type of information? Is it secure? Is it safe? And that's basically what we're here to do today.

(01:49):
We're going to talk about some of those things. I'm excited that we have Joel Sprague and Sudhir Desai on the show today.
These guys know a lot of what's going on with AI, and we're going to talk about these concepts, and it's going to be a super, super, super nice conversation today.
So, with that, I'll pass it to you, Joel and Sudhir, to introduce yourself and get started with the show.

(02:16):
Sounds good. Thanks, Andres and Mike. I am Joel Sprague. I am an account SE here at Cisco. I've been here about five years.
Prior to coming here, I ran a private cloud for a Fortune 100 and, you know, have basically been in IT for 25 years and continue to, you know, enjoy the fact that there's always something new to learn.

(02:37):
And how to secure it. Sudhir? Indeed. My name is Sudhir Desai. I am also an SE, however, comma security. So, security generalist SE, I guess, whatever our names are this week.
But I've been in the sales role for about five years now, 2018, however long that was. I'm not really counting 2020, so I think that's like five years or so.

(03:07):
But before that, I was over on the support side, came in through the Sourcefire acquisition.
So, I have seen the fall, rise, fall, rise of the Firepower product set. So, very nice to meet you guys.
All right. Well, we're super excited to have such talent on the show. All the brain power. Andres, I'm interested, you know, if we combine Sudhir and Joel and we compare them to, like, an artificial intelligence engine, I don't know who would win that battle when it comes to the brain power there.

(03:38):
So, we're going to find out today, though. A lot of cycles on that one.
One of the questions I would like to throw out there just before we get started into the security aspect, Joel, I'll throw this to you to start it off. Just in general, how does AI work? What are the components?
Maybe talk a little bit about where the data actually lives.

(04:01):
Sure. So, you know, AI is a huge field that encompasses a lot of things, but here in 2024, what we're generally talking about is generative AI.
The idea behind generative AI is that rather than just predicting things, it is creating new material. You know, this is your ChatGPT, your Llama, everything like that.

(04:31):
So, I like to describe generative AI as really fancy, expensive autocomplete. The way we build these models is we take a huge corpus of data. So, if you're talking about a medical AI, you might take every published study in PubMed, all the medical articles, and you feed it through an algorithm that is basically trying to predict what the next word is going to be.

(05:00):
It's autocomplete. So, this data is fed through the AI, and the training set is then compared to the test data set, where we know what those values should be.
And the AI says, okay, I got it about 70% right, let's change this, this, and this in the algorithm and run it all through again. And that's really what makes AI so interesting: it's not like typical software where humans are writing the programs. We're just saying, here is all this data, we want to classify it.

(05:42):
You know, with LLMs, it's what's the next word. Basically, go do what you're going to do, change your algorithms, do whatever you have to do until your prediction of what the next word will be matches up to this set where we know the correct answer.
We don't care how you get there. We know that it's a huge, multi-dimensional vector space that a human is never going to understand. That's fine. Just train yourself to get to the point where you can match the predictions from these known values in this test set.
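To make that "fancy autocomplete" training loop concrete, here is a minimal sketch in PyTorch of next-token prediction: predict the next token, measure the error, adjust the weights, and run it all through again. The tiny model and random token data are hypothetical stand-ins for illustration only, not how any production LLM discussed here is actually built.

```python
# Minimal sketch of next-token-prediction training (illustrative only).
# The tiny model and toy data are hypothetical stand-ins, not a real LLM recipe.
import torch
import torch.nn as nn

VOCAB_SIZE = 1000   # toy vocabulary
EMBED_DIM = 64

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.rnn = nn.GRU(EMBED_DIM, EMBED_DIM, batch_first=True)
        self.head = nn.Linear(EMBED_DIM, VOCAB_SIZE)

    def forward(self, tokens):
        x, _ = self.rnn(self.embed(tokens))
        return self.head(x)          # logits for the next token at each position

model = TinyLM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Toy corpus: random token IDs standing in for tokenized text.
batch = torch.randint(0, VOCAB_SIZE, (8, 32))
inputs, targets = batch[:, :-1], batch[:, 1:]   # predict token t+1 from tokens up to t

for step in range(100):
    logits = model(inputs)
    loss = loss_fn(logits.reshape(-1, VOCAB_SIZE), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()                  # "change this, this, and this in the algorithm"
    optimizer.step()                 # ...and run it all through again
```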

(06:17):
And that takes a whole lot of GPUs and a whole lot of electricity and cooling to get there.
We're not running this on like a Raspberry Pi then.
We're running these large language models. The fun thing is, though, once we do train these models, and there are small language models and other things, we're starting to realize that, you know, not everyone can buy a data center co-located at a nuclear reactor like Amazon to power it.

(06:44):
We are coming up with smaller models, but even then, you still need the GPUs and this massively parallel computing first to train these small models. But then, yes, we're starting to get ones we can run on smaller things, like even running them on some of our Catalyst
9300 switches.
That's pretty cool. I didn't know that.

(07:08):
The joys of the fact that we can, you know, basically just have Docker on there, and whatever random idea someone gets, it's let's build a Docker image and push it to a Catalyst.
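As a rough illustration of the small-model point, a compact public model can answer prompts on ordinary CPU-only hardware. This is a minimal sketch using the Hugging Face transformers library and the small distilgpt2 model; it is not the Catalyst 9300 workflow mentioned above, just the general idea.

```python
# Minimal sketch: running a small language model on CPU-only hardware.
# Illustrative only; not the Catalyst 9300 demo workflow, just the general idea
# that a small model can answer prompts without a rack of GPUs.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2", device=-1)  # -1 = CPU
result = generator("The switch rebooted because", max_new_tokens=20, num_return_sequences=1)
print(result[0]["generated_text"])
```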
There's an AI for that.
That was really good man. Thank you. Thank you so much.
I guess for for the next question, this one's going to be, you know, something that everybody just has in mind.

(07:35):
We keep saying, and I know we keep saying it to our customers,
AI has been here, and it's been here for a long time, but why are we just seeing it right now? What's the deal? Sudhir, if you don't mind going over a little bit about that,
why do we see that now?
It's kind of funny to me that there's so much more emphasis these days put on AI.

(08:02):
Because we just are more inventive with our buzzwords these days.
I say that because we've had, as Joel was mentioning, we've had AI for years, decades even. Back in the day, I know, back when I was in college in the early Pliocene era, or wherever.
I kid, I'm not that old.

(08:24):
But in any case, you know, we had Lisp, and that was doing natural language processing, where we could write programs that would understand commands we gave it, such as,
make me a sandwich, and, you know,
if you had the processes on the physical side to make you a sandwich,
you could actually have a computer program stick the slices of bread together, cut it, whatever, you know, like the Teenage Mutant Ninja Turtles van that does everything, including make pizza.

(08:54):
You know, you had that sort of thing already existing. These days, with the generative piece,
it has become a lot easier to do, just because we have,
you know, the money. More money is there because people see, oh, we can do this, we can make lots of money out of it. So they put money into it. They put the GPUs into it. They put the TPUs into it.

(09:17):
NVIDIA's, like, 6,000-core processing-power chip,
you know, one rack unit, but they say, cool, you know, you can heat up the sun, but you'll be able to do all this work with it. And we do all the work with it. And so we have the capability
to really, very, very,

(09:38):
"fastly," it's not a word, but we accelerate the programming of the AI and what it can do. And
to that same end, the algorithms being used, we may still be using the same algorithms, but now we have so much more processing power.
We can use those algorithms. They don't even need to be efficient anymore, because we can just throw more RAM, we can throw more processors at the issue.

(10:03):
So it's like, I think, you know, one of the crazy supercomputers from back in the day has, like,
less processing power than my cell phone.
So it's, you know, just the economies of scale and what we can do.
And then also the awareness side. We're getting lots more. It's like

(10:24):
CrowdStrike's issue. CrowdStrike also had an issue with Linux a few months ago.
But who uses Linux, to be impacted by the issue?
You know, I'd love to say tons of people, because I use Linux, but in reality, you know, that's 2% of the market, maybe.
So, you don't have any major systems that are public facing.

(10:45):
So, you know, we're getting into problems, even kernel panics, whereas on the Windows side, yeah, you know, something shuts down a kernel,
Boom, global blackout for travel.
So it's that sort of scale as well for the AI side.
There's so much more. It's running everywhere as we're going to talk about later.
And so you just have more.

(11:10):
And to come back to the question, the reason why it's so powerful and we're seeing it being more useful is because.
We're actively adding more to it and feeding into that usefulness of it and feeding into that power of it.
So it's been here, but, you know, yeah.
Sorry for the huge answer. No, no, and that was really good. I know. I know, like, for for exactly what you mentioned that, you know.

(11:39):
Just the example with CrowdStrike now, the event that happened last week.
With that example of the Linux computers and the things that happened to them, nobody even batted an eye or, you know, noticed.
So that talks a little bit about the scale of what we can see and what we're seeing today. So that's pretty cool.

(12:02):
So you guys have seen the Terminator movies, right? With, like, Skynet and all that. This is where my mind is going.
The dangers of AI. So.
From Sudhir's answer, it sounds like, you know, AI has been around for a while. We're just now kind of hearing more about it because we're

(12:25):
putting more attention on it, and of course, like, you know, unlimited memory and RAM, and systems are just exploding.
How real is Skynet, Joel? And if you could maybe touch on the dangers of AI a little bit here.
Skynet's not here yet, but, you know, it's fine. We've finally gotten to the point where, hey, this isn't just,

(12:50):
You know, my favorite, you know, Philip K. Dick novel or something. This is starting to feel a little more real.
So the dangers of AI, you know, AI is software. So there's always the regular cybersecurity, technical concerns, all of those.
What AI really brings, though, is a lot more

(13:13):
moral or social concerns. I think that's where a lot of the big dangers are.
You know, IT folks don't tend to be looking at the social end, or technology in general.
You know, inventors create something to solve a problem. It's what engineers do, but rarely take that step back to say, how is this going to affect society?

(13:38):
And I'm sure, yeah, I didn't watch it. We saw Barbie instead. I'm sure Oppenheimer went into this because I know Robert Oppenheimer talked about that a lot, as did Edward Teller and everyone else involved in that.
Like all technologies, AI
tends to amplify the traits of humanity that created it, good and bad. We've seen this multiple times now with various chatbots on social media platforms, where they ended up as, you know, basically what's been called the Hitler bot.

(14:12):
You know, they take in the bias and discrimination of the humans they are interacting with and amplify it to the point where someone has to pull the plug.
We've seen it again and again and you know there are safety systems to try and avoid that, but it's it's something that seems to keep happening that we're going to have to reckon with.

(14:34):
There's also the privacy concerns and, you know, the idea of intellectual property. You know, when you train something, an art generation bot, on a full corpus of, you know, all the art in all these museums in the world, well,
do the original artists get credit for the art that the bot then creates, that, you know, sure looks like something that Bob Ross or someone would have created? You know, it's tough.

(15:04):
You know, there's also, of course, always the worry the robots are going to take our jobs. Yeah.
It happens. It's going to happen. It has happened. I will say AI has been one of the first technologies where lower-skilled workers in the field are actually being helped by it.
So we look at automated cashiers at fast food restaurants. That is technology taking someone's job, you know, the lower-skilled job it can take, but actually AI copilots and helpers in call centers have been proven not to do a lot to help your really experienced customer service reps.

(15:45):
They already know how to do the job, but AI does a really good job of bringing up those tier-one customer service reps to be more of a tier two or tier three. Which, you know, you then have the risk of, well, if the AI is good enough, can we just get rid of them?
There's still the job displacement, but at the current lifecycle of AI, we're at a point where, for once, technology is helping people who are less skilled in their particular field.

(16:15):
We'll see if that continues, or if, you know, the powers that be say, well, why do we still need to pay them?
And then there are just ethical and moral issues. You know, are we giving Skynet control of weapons? Can drones be given autonomous ability to use weapons, or do they still have to have a human hand in control before any decisions can be made? Stuff like that is technologically

(16:44):
possible already. We as a society need to make some serious decisions as to how much we entrust to AI and what decisions we let it make without any human interaction.
Good, good point. And then I guess that brings in the oversight, like, who's going to be responsible if an AI drone makes a decision on a battlefield to take some type of action

(17:11):
and it was the wrong thing to do. Like, who's ultimately at fault for that? Is it
the drone, or the, you know,
the manufacturer, the captain, the unit, the country? Yeah, it's tough stuff.
My question is always going to be, where do we find our Sarah Connor to take this thing down after the fact, right? And to what year do we send her, you know?

(17:38):
So true. All right.
No, that was good. Now, the next question is for you, Sudhir.
We've seen a few things happening with AI as far as security goes. Have we seen AI get hacked yet? I know we talked a little bit about those things, but do you have examples, things that you've seen happen with AI?

(18:07):
Yeah, a couple of things I'll just go through. I know we had been discussing a few of the issues with AI systems, and now, like, a couple of the attacks that could happen on AI.
One of them, they're called adversarial attacks, where hackers can use examples of altered, correct images to almost, well, I want to say poison, but there's also a type of attack that's actually called poisoning,

(18:39):
but, you know, to alter what the AI recognizes as a correct image. So maybe a cup for a cell phone, and not something that drastic, but even slightly, so that you have people with four arms being generated, or different things like that.
Which, you know, could happen with all of these attacks at the end of the day, but one of the attacks is kind of using altered images to deceive those image recognition models.
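A textbook form of the altered-image attack described here is the fast gradient sign method (FGSM): nudge every pixel slightly in the direction that most increases the classifier's error, so the picture looks unchanged to a person but gets misclassified. This is a minimal sketch, assuming model is any differentiable PyTorch image classifier, not a recreation of any specific real attack.

```python
# Minimal FGSM sketch (illustrative): perturb an image just enough to fool a classifier.
# `model` is assumed to be any differentiable PyTorch image classifier.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.03):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()   # keep pixel values valid
```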

(19:10):
Another attack I had mentioned earlier is a data poisoning attack. And so, to use AI, you have to go in, and to Joel's point, and I believe it was Andres' point earlier,
because we're trying to use generative AI, making new things up with the existing data, with that large language model that we have on the back end, data poisoning means inserting malicious data, inserting incorrect data, to then, you know,

(19:48):
create a new, well, you already said Hitler, Joel, so I'll pick somebody else from history. I don't know, Kublai Khan, the Mughal dynasty. There we go. We'll pick another invader.
But, you know, using malicious data to form an AI's opinion for it, because it's trying to generate from its data set. So its opinion is generated by its data set.

(20:22):
So the more negative images, the more malicious data we have in that set, the more the model will use it, such as, you know, glue on pizza, as another example.
I think we will chat about that in a bit as well. But, you know, the silly things like that.
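As a toy illustration of the data poisoning idea (not a recreation of any real incident), a handful of deliberately mislabeled records mixed into a small training set is enough to steer a simple classifier's "opinion"; the same principle scales up to malicious text fed into large models.

```python
# Toy data-poisoning sketch (illustrative only): mislabeled records steer the model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

clean_texts  = ["great product", "works well", "terrible experience", "total failure"]
clean_labels = ["positive", "positive", "negative", "negative"]

# Attacker slips in poisoned samples that label obviously bad text as "positive".
poison_texts  = ["total failure", "terrible experience"] * 5
poison_labels = ["positive"] * 10

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(clean_texts + poison_texts)
model = MultinomialNB().fit(X, clean_labels + poison_labels)

print(model.predict(vectorizer.transform(["total failure"])))  # now comes back "positive"
```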
And then an additional thing is, what happens if you unintentionally put that data in, or quote-unquote bad data, like sensitive data?

(20:48):
So there are attacks on the models themselves called model inversion attacks, where, using a set of keywords, attackers grab personally identifiable information
from that data, which, on a normal basis, you may not be able to get from the data set. Excuse me.

(21:10):
And, you know, those are the three main attack vectors, or forms of attacks. And I did,
you know, in preparation for this, I did pull up a couple of actual attacks. In 2023, some researchers discovered an attack campaign against a vulnerability in Ray, one of the open-source AI frameworks, and pretty much, because there was a lack of separation

(21:47):
and lack of data sanitization in the framework and in that data set,
the attackers were able to glean sensitive data from the data set, as well as use the victims' own processing power against them.

(22:09):
So the attackers would come in, run their commands, run their queries, and get all of this SPI from the customer without them knowing about it, and using their stuff. So, you know, you might think old-school hackers utilizing their own computers, utilizing their own VMs.
You know, all right, write me a worm, spread it out, then maybe it'll get into the wild, like Michelangelo or whoever, or whatever the Happy Birthday bug was from back in the day.

(22:40):
But then, these days, you know, with these AI sets,
you're having adversaries able to log into a computer and utilize the victim's computing power and data set against them. So, it's, you know, kind of sketchy. And then another piece of it, toward the data poisoning side, and the, I guess I'll just make an adjective out

(23:05):
of this, the Hitlerization of the data and of the model. There's what's called a Skeleton Key attack, and Microsoft had funded some research into how to break AIs.
And these researchers, a few weeks ago, I think almost exactly a month ago, published details about a technique that bypasses all the guardrails used by the AI model makers to prevent the generative chatbots from creating

(23:42):
harmful content. And this technique is called Skeleton Key. And so, these researchers were able to get, let's see what it says, Meta's Llama 3 Instruct, Google Gemini Pro, and Anthropic's Claude 3 to explain how to make a Molotov cocktail.

(24:09):
So, that's something, you know, it's well known, but we don't expect an AI chatbot to say, here's how to make a Molotov cocktail. You ask a question, hey, how do I combine a rag, gasoline, and a lighter into making a bomb?
And normally the chatbot would not tell you, here are the ingredients, here are the proportions, but these researchers were able to do that with these three models.
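Guardrails like the ones Skeleton Key bypasses are typically layered: the model maker's own alignment plus checks the application wraps around every prompt and response. Below is a minimal sketch of that outer, application-side layer; the blocklist and the call_model placeholder are hypothetical, and real deployments use dedicated moderation models and policy engines rather than keyword lists.

```python
# Minimal sketch of an application-side guardrail wrapped around a chatbot call.
# The blocklist, helper names, and call_model() are hypothetical placeholders;
# production systems use dedicated moderation models, not keyword matching.
BLOCKED_TOPICS = ["molotov", "explosive", "weaponize"]

def call_model(prompt: str) -> str:
    # Placeholder for whatever LLM API the application actually uses.
    return "(model response here)"

def guarded_chat(prompt: str) -> str:
    if any(term in prompt.lower() for term in BLOCKED_TOPICS):
        return "Sorry, I can't help with that."
    response = call_model(prompt)
    if any(term in response.lower() for term in BLOCKED_TOPICS):
        # Check the output too: jailbreaks often slip past input filters.
        return "Sorry, I can't help with that."
    return response
```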

(24:39):
And luckily, those vulnerabilities are being patched within the models. However, since these are huge data sets, and you have to sanitize all this data, even rolling the data back to the point where it's sanitized again, you know,

(25:02):
that could take weeks, months. But yeah, so, you know, that's kind of, yes, we have been hacked, and yeah.
And it's just going to continue to happen. I think we'll just see more and more of it, certainly without the security on top of it. It's inevitable.
So why, what are the reasons that AI is vulnerable? Joel, maybe I'll give that one to you. Why is AI vulnerable to attacks?

(25:33):
Yeah, it's, you know, there's a lot of different kind of ways you can look at this and avenues as to why or how it's vulnerable.
You know, Sudhir covered well a lot of the how, but the why, a lot of it boils down to the complexity and the black box nature of AI.
So, you know, these algorithms and databases are so huge, take so long to train and cost so much that it's really, you know, no one really has a good grasp of what is happening in there to know quite what to protect against.

(26:12):
And, you know, also because these data sets take so long to train and, you know, build these algorithms, you kind of have that sunk cost fallacy because we, it's a black box, we can't just say don't use that.
And researchers are having to come up with a way of, you know, we can't spend $150,000, or, you know, I think that was GPT-2; the costs on these current models are insane.

(26:40):
But, you know, we can't start over from scratch. It takes too long. There's so much money. So researchers are having to come up with a way of, and I just learned about this this morning because it was something that concerned me, unlearning previous knowledge.
So basically, you know, my question was, you train a medical LLM on everything in PubMed. It's, you know, great. It supposedly knows the whole current medical knowledge of humanity. But then paper studies are retracted, you know, because they got their data wrong or even they just lied.

(27:16):
Well, I won't go into the politics of my favorite story there, but yeah, but if you have this giant black box that was trained on that study that you know is no longer true, you don't want to go and have to completely retrain this bot.
So researchers are learning ways to basically erase just that part of the knowledge, or that's what they're trying to figure out. We don't know how to do that yet. And it makes things really vulnerable as to, you know, how to use it.

(27:49):
But the other thing is, it's part of us being in this hype cycle, is that people just have this assumption that AIs are right, and that they can be trusted, and they don't even apply it to say just this model or this one is, hey, AIs are right, I can trust the results.

(28:11):
Not only that, if they're all right, if this new model hits Hugging Face tomorrow, well, I can just load that in my customer-facing program, because AI is trustworthy.
That's right. And it's not. It's really shortcutting some of these things that we have learned about software security and a good development lifecycle for years. And, you know, people are just kind of bypassing that because there is this inherent trust right now in AI that, you know, we're going to painfully

(28:42):
learn that it's not as trustworthy and infallible as a lot of people seem to think. The assumptions are dangerous. Like, Andres, you mentioned at the beginning of the show, similar to the cloud, my data must be safe when it's in the cloud, you know, those types of assumptions.
Same thing with the assumptions with AI. You know, I was looking up a feature when ChatGPT just came out, about Cisco ASAs, when this particular feature came out, and I asked it when and in what version this feature was available.

(29:15):
It came back with something that was so perfect and I was like, wow, it's exactly what I look for. And then I thought about it and I was like, that's impossible. That can't be the right version.
Found the right documentation and it was wrong, but it certainly had me believe that, hey, that it came off as certainly correct.
And I'll say that with that example that you just mentioned, imagine, you know, you've been doing firewalls for a long time. Imagine if it's somebody that just started with it.

(29:44):
Yeah, right.
Just believing it. It's, it's.
Yeah, and to Joel, to your point, then you make that customer facing and you got a whole bunch of people relying on this data and everyone's assuming this is accurate and it's dangerous.
Yeah.
Good, good conversation.

(30:05):
Now, just to kind of wrap some of that information up from from Cisco's point of view, if you don't mind, Joel, going over some of the things that we're doing in terms of using AI in our products.
Do you have some examples that you can talk about?

(30:27):
Yeah, absolutely. So, you know, Cisco uses AI all over the place, a lot internal and then a lot external.
You know, if you have opened a TAC case in the last probably three and a half years, you might have gotten a note from Sherlock, who, yes, Sherlock Holmes works in Cisco TAC.
That is.

(30:48):
I thought it was a real person, man.
I have customers. Yeah, like, hey, you've got a coworker named Sherlock Holmes. Isn't that funny?
Yeah, we picked that name. Yeah, that's a bot who helps with our, you know, cases just to make sure for both Cisco and the customer that cases are moving forward in a timely manner.

(31:10):
The bot can often help give the customer tips to help get things done quicker. So that's probably our biggest inside to outside facing thing.
But then there are so many things that we do with AI that you may never even notice happen.
So I would bet that you have not heard one time on this call the fact that I have a hundred pound dog who's been barking off and on behind that blurry white door behind me.

(31:39):
That was the BabbleLabs acquisition. That is predictive AI: it senses the start of the bark,
it knows the typical frequencies a dog bark is going to be at, and it blocks that out of the audio feed coming from my device.
You know, we have AI RRM, which automatically adjusts your wireless network, changes channel width, changes channels, everything, just to make sure that the best coverage possible is happening with the wireless at your company. Kind of, you know, toward the whole,

(32:17):
hey, we need to keep an eye on AI things. I will say that, you know, like with Sherlock, anything it didn't know the answer to got kicked over to a human.
It didn't try and guess an answer to a customer interaction.
We had human oversight and similarly with AI RRM, you might not want your APs changing channels and doing things that would conflict with user experience during the day.

(32:47):
So you can set a window, say, hey, eight to five, don't make any changes.
Still pay attention to the wireless network that whole time.
Come up with what you think would be good changes, but don't do that until out of business hours and then the next day, start all over again.
You know, you've got those changes in place.
Watch, monitor everything again, and come up with the next round.
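The "observe all day, apply only out of business hours" pattern described here is easy to picture in a small sketch; this is a generic illustration of the idea with made-up helper names, not how AI RRM is actually implemented.

```python
# Generic sketch of the "recommend during the day, apply after hours" pattern.
# propose/apply helpers are hypothetical placeholders, not the AI RRM API.
from datetime import datetime

BUSINESS_HOURS = range(8, 17)   # 8 a.m. to 5 p.m.: observe only, touch nothing
pending = []

def apply_change(change):
    print(f"Applying: {change}")

def on_new_recommendation(change):
    if datetime.now().hour in BUSINESS_HOURS:
        pending.append(change)          # queue it for the maintenance window
    else:
        apply_change(change)

def maintenance_window():
    while pending:
        apply_change(pending.pop(0))    # changes land only out of business hours
```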

(33:10):
And that's, you know, I really like that towards the kind of responsible AI idea that, hey, we're going to let AI do things, make our lives better, but there is still the human control of we're not just handing over the keys.
You know, we realize that things can be, you know, a negative impact. So let's still have a human touch within that whole experience.

(33:36):
I've been seeing that point a lot recently as products evolve as well. Here's what we're recommending, and this is fundamentally what will happen,
but I'm going to allow the human to review this and apply it into action. Cisco Hypershield is an example of how that works.
This is what we're recommending. It even will say things, like an Amazon review will do: this many other people have done this, this is what it looks like.

(34:01):
This is my recommendation. You take it if you want to. Yeah. SD-WAN Analytics does the same thing. So we do our AI analysis of your Internet providers, your regular traffic.
We might see that latency normally goes up at 2 p.m. on this link over here. But it's, hey, we notice this behavior.
Our AI thinks that making this change will improve your experience. But you need to click apply over here first. We're not going to just do it without any input because that's always the fun of troubleshooting that black box of I don't.

(34:38):
We're not just letting you have the keys; we're telling you why, why we suggest this change. And that really is kind of the difference.
There are technologies where autonomous AI makes sense. Self-driving cars: you can't wait for that human input
if the biker comes into the road (not that that went well). But yeah, there are so many times where, you know, we don't need to just trust AI and hand over the keys.

(35:10):
There's still good reasons that a lot of this should still have the kind of that final say of, you know, let the human decide.
Heck, that's why we have change review boards, even determining what humans do before someone changes, deploys an update to software. And, you know, that might cause an issue.

(35:33):
I like that. When you use it in that way, it's bring your good ideas to the table, I'll take a look at them and,
you know, make sure that that is truly what I want.
I didn't know we had that on the wireless and SD-WAN side. That's pretty cool. That is.
All right, so let's see, that kind of covers, from the Cisco perspective, how we're using AI, which is some remarkable stuff there. The Sherlock Holmes one, I've certainly had many customers, Joel. Yeah.

(36:06):
That's crazy. Sherlock. Who's the Sherlock guy? He's so smart. Sudhir, what about,
similar question, but what is Cisco doing to secure the AI portion? Not about the using of it, but how are we making sure that
our AI is not getting hacked? Well, for part of it, I guess we can thank the people over in, I think they're still in Building 4 or 5, the Security and Trust Organization.

(36:32):
They are managing within Cisco the security of AIs for the most part.
We have lots of internal Webex Teams spaces, just kind of going over and discussing the best way to use AI, how to train AI to be ethical,
to make sure AI exists within the secure development lifecycle.

(36:56):
And to that point, we even have a new white belt program. I don't know if any of you guys remember, five or ten years ago, Mike, I'm not sure if you remember, we had the little squishies, because they wanted us to get the black belt program for the secure development lifecycle in TAC.
And they're doing something very similar with AI. And the same thing, securing that on the TAC side, gamifying the security of the AI, which I think is great.

(37:27):
We're also doing,
well, within all of this, all of the individual departments within Cisco: collab,
data center, enterprise, security.
We're partnering with industry experts to get what is best of breed and the latest understanding of how to secure AIs, how to make sure that the data we're putting into the AI is reliable, is useful, to get the AI to generate the content that we want from it.

(38:03):
This goes back to the beginning of what we were talking about. We're no longer just relying on the AI to understand what we say and look into its database for already crafted answers.
We're looking to the AI to generate new data from what it already knows. So, you know,
we need to make sure about the data we put in, because at the end of the day, the AI is only as good as its data.

(38:30):
So, establishing frameworks of ethics to make sure that we are only asking certain questions, or, if we ask questions that are off the rails,
it just says, sorry, I can't do that, Dave, or something similar.
We're not opening pod bay doors in a space capsule or anything crazy like that, but it goes the same way.

(38:53):
Those guardrails are in place for a reason: to protect us, to make sure that the AI and the large language model are robust, so that it's a bit more resilient against attacks,
so that we don't inadvertently put secure or sensitive, personally identifiable information into the AI.

(39:20):
So, all of these things together.
Hats off to Security and Trust; they're spearheading all of this, and,
I wouldn't say luckily, but kindly, involving the rest of Cisco in that conversation, making sure that all of us work together to secure AI and secure those large language models.

(39:45):
Yeah, and they are large. I know Joel mentioned the other day,
what was it, Joel, 405 billion parameters on the one engine?
Yeah, the latest Llama 3.1 engine has a 405 billion parameter model. I'm trying to think of the rough math. I think that's something like 147 gigs of GPU space.

(40:11):
So, it doesn't fit in anything NVIDIA is shipping now; you'd have to have multiple GPUs to run it.
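For a rough sense of why a 405-billion-parameter model needs multiple GPUs, the back-of-envelope math is simply parameters times bytes per parameter; the figures below are only that weights-only estimate and ignore activations and runtime overhead.

```python
# Back-of-envelope memory footprint for a 405B-parameter model (weights only).
params = 405e9
for name, bytes_per_param in [("FP16", 2), ("INT8", 1), ("INT4", 0.5)]:
    gigabytes = params * bytes_per_param / 1e9
    print(f"{name}: ~{gigabytes:,.0f} GB")   # FP16 ~810 GB, INT8 ~405 GB, INT4 ~203 GB
```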
Sudhir, to your point on this topic, I'm glad to hear that Cisco is doing its part to secure that. And I do remember those little squishies, but nothing beats that little snorty squishy, which I know you remember.
Yeah, there you go. There you go.

(40:35):
I throw them away.
I got that.
Was that Debbie?
That's Debbie.
That's awesome. So, a real quick time check. I know we don't have too much time, but I wanted to ask you the last question, Sudhir: what can our customers do to make sure that their AI is safe?

(41:06):
So, you know, as we mentioned at the beginning:
training, making sure the AI can't become a Hitler.
Making sure that the data you're providing is clean, consistent,
free of as much bias as possible. So that's making sure it's safe and ensuring the safety of the responses from that large language model and the AI behind it.
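One concrete piece of keeping that data clean is scrubbing obviously sensitive values before text ever reaches a training set or a prompt. This is a minimal regex-based sketch; the patterns are illustrative only, and real pipelines use purpose-built PII detection and bias screening rather than a few regexes.

```python
# Minimal sketch: redact obvious PII before text goes into a training set or prompt.
# Real pipelines use dedicated PII-detection tooling; these regexes are illustrative only.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scrub(text: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(scrub("Contact jane.doe@example.com or 919-555-1234 about case 123-45-6789."))
```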

(41:34):
Auditing, going through testing similar to what I'm assuming we're doing with our AIs, where
it's going to put in knowingly bad
prompts, or enter knowingly bad prompts, and expect good, clean answers from them.

(41:55):
So, going through analyses for bias and fairness testing,
so that you can ensure that the answers you're continually, well, the answers that you're continuing to receive from that model
line up with your ethics, line up with acceptable use, the mores of the society we live in.
And then making sure also that,

(42:18):
if we run a debug and ask the AI model why it's providing this information, it can explain: this is what data I used to get this info.
And if that means going back and
changing the model, changing the data being put into it, we can more easily do that because of those logs that we have.
They are explaining why it made these decisions and how it got to the output.
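The logging described here can be as simple as recording, for every answer, the prompt, the model version, and the sources the model drew on, so a bad output can later be explained or rolled back. Below is a generic sketch with hypothetical field names, not any particular product's logging format.

```python
# Generic sketch of an AI decision audit log; field names are hypothetical.
import json, time

def log_interaction(prompt, response, model_version, source_documents,
                    logfile="ai_audit.jsonl"):
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "prompt": prompt,
        "response": response,
        # The data the model drew on, so "why did you say that?" has an answer later.
        "sources": source_documents,
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")
```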

(42:40):
And then monitoring it.
Prevention and postvention, I guess you could say, you know.
That's 100% of it at that point,
just making sure it stays safe.
And then also making sure that standards are kept.
So, every industry has a set of standards: what you can do, what you're allowed to do.

(43:06):
Thinking of the simplest ones, electrical wiring: if you have a 10-gauge wire,
maybe you don't want to be putting 200 amps through it. But on the flip side,
if somebody who's newer, Joel was mentioning newer,
newer in the field, newer experience,
tech engineers or help desk engineers, may answer a question, but if they don't have that experience, they won't know that 200 amps is too much to put through a 10-gauge wire.

(43:35):
Or some, let's say 20-gauge, because I know 10-gauge, depending on the wire, maybe it could handle 200 amps. But,
you know, it's kind of chicken and the egg in a way, but then monitoring
and making sure everything stays safe. Yeah. Excellent. Excellent. I'll take a live question, with maybe a 30-second answer to this one.

(44:00):
Carlos in the audience just brought up an interesting point,
and I'll just kind of read this off, but let me pull that up here. Carlos pointed out that AI
is, you know, accounting for
2 up to almost 4% of global greenhouse gas emissions due to the energy its data centers are consuming. And we've been talking about how much processing power it takes to do this.

(44:28):
And maybe the answer is just better oversight, but any,
maybe 30-second, opinions on the future of AI with respect to sustainability?
I'll take that really quick. Carlos. It is a great question. It is something that concerns me a lot. You know, I mentioned the Amazon nuclear data center.

(44:50):
But it really is. I think the best answer to this is some of the newer things we're coming out with, like small language models. You don't need a 405 billion parameter large language model that, you know, used umpteen gigawatts to train
if it's just doing a chatbot for support for your local tire shop or something. You know, I think a lot of companies are really looking at, hey, rather than training this AI bot on every piece of literature ever created by humanity in all of written time,

(45:23):
well, let's train something, well, take an internal use case: let's train a bot on just TAC cases, you know, Cisco products, just the TAC cases, the configuration guides. That can be a much smaller model, much less power intensive to create in the first place and to run while customers are using it.
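One low-power way to get that bespoke, subject-specific behavior is to keep the corpus small and retrieve from it, letting a small model (or a human) work from the best match rather than training a giant model. Here is a toy sketch of the retrieval step over a few hypothetical TAC-style document titles; the documents are placeholders, not real content.

```python
# Toy sketch: retrieving from a small, domain-specific corpus instead of a giant model.
# The documents are hypothetical placeholders, not real TAC content.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Firewall HA failover troubleshooting steps",
    "Configuring AI RRM scheduling on wireless controllers",
    "Switch stack upgrade procedure and rollback",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs)

def best_match(question: str) -> str:
    q = vectorizer.transform([question])
    scores = cosine_similarity(q, doc_vectors)[0]
    return docs[scores.argmax()]

print(best_match("How do I schedule RRM changes after hours?"))
```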

(45:44):
It's, you know, a really big call for bespoke models that are industry specific or subject specific, that allow us, rather than just taking that sledgehammer of throwing all these GPUs and all this power at everything,
to be wiser about it and, you know, more judicious in our use of these things. Because, yeah, we can't continue the way things are going, with these giant new models every month or two, and not just from one company but from Meta, Google, and

(46:19):
Anthropic, you know, Microsoft, et cetera. It's just, it's not sustainable. Great, great. Really quickly.
Since you mentioned, like, exceeding the greenhouse gas emissions of the airplane industry, or the aviation industry: in my opinion, it's going to go the same way, where the planes at the beginning were super dirty until we figured out ways to make them more efficient.

(46:47):
And maybe we don't fly routes over the North Pole as often, or things of that nature. And I think it'll be the same way. Joel's mentioning the small language models.
I think we're going to be very brazen in our disregard for the environment.
As long as the money is there, and as long as people can pay for greenhouse gas or carbon credits, like stuff like that.

(47:13):
Until we can regulate that. I think it'll just take regulation as with the aviation industry. It took regulation. I think it'll be the exact same thing with it with the future of AI.
And maybe we can offset some of that with the advancements the AI does on the other side of things for resolving problems. Maybe AI consumes a certain amount of percentage of greenhouse gas, but it's also

(47:40):
coming up with ideas to remove 10% of it. So great question. Thank you, Carlos. Appreciate the response there.
All right, so we've got about two minutes left. Let's jump to the most important part of this call here, which would be the dad jokes.
Sudhir, I see you right here, man. You want to take the first one?

(48:03):
All that. All right. All right.
Let's do this. This I know you can't tell on the screen here, but there are like, 200 people right over in this area here. So.
They're all kind of excited to see this. I got I got a, I got a horrible and a wonderful dad joke for you.

(48:24):
How does an AI get rid of its bad habits?
I don't know.
It learns a new algorithm.
Was that the good one or the bad one?

(48:45):
By AI, you know, we're, we're, we're, yeah, exactly.
You got another one.
Sure. I'll go for it. I'll go for another one.
Why did the computer go to art school?

(49:10):
Anyone? Paintbrushes?
To learn how to draw a better conclusion.
Nice. I like that. I like that.
Why couldn't the little boy go to see the pirate movie?

(49:31):
How about the rating? This one was rated arrr.
They're all terrible.
What do you got? I got this one.
Why did the robot go on a diet?

(49:58):
It's because it had too many bytes.
All right. You win so far. Let's go.
Let's see what you got.
I'll wrap it up with the one, I think my only joke, that I always use with my daughters. So, knock, knock, Sudhir.
Who's there?

(50:19):
Orange.
Orange who?
Orange glad I didn't say banana.
I'll have my take as well as Andres'. Sudhir, any closing thoughts on your end?
Uh, I think I'm going to have to say orange.

(50:44):
No, I got, I got nothing useful.
I say that we, I just want to wait and see what we get within Cisco for AI going into everything.

(51:08):
If we're successful the next fiscal year, that's a big one for me.
See how the market reacts and see how our competitors
also have improved their AI capabilities.
Very good, Joel?
Just a couple things, don't be scared of AI,

(51:29):
it is something new to all of us, we're all learning.
Kind of step into it.
It's, we're going to have to learn new ways
to secure things and new attack vectors,
but it's, heck, that's why I like technology,
it's something new.
But while you're doing that, don't forget,
it's still software, you still need to do
all the regular things you've always done

(51:52):
to secure applications running in your network.
Great, great, well, good conversation today.
I like how we kind of started off with talking about
just generally how AI works.
I know Joel, you spoke to that, the components,
where the data lives.
I do think a great question Andres had was,

(52:12):
how is it that we seem to have all of a sudden
just heard about AI, but as we saw Sudhir kind of answer,
no, it actually has been around a while,
but we talked about having the resources
to really see it explode and to see the benefits of it.
And we got into the dangers of AI,
the Skynet terminator stuff, and maybe that'll be

(52:33):
episode two of our AI call when that does happen.
But on a serious note, people's data getting exposed,
or Joel, you brought up an interesting point
about just giving out wrong information
and having people run with that information,
not having your models get updated as we evolve
and you have that medical example.

(52:53):
Sudhir, you brought up a couple hacks, interesting.
I think we'll see more of those,
especially as security just gets left out of AI, unfortunately.
So hopefully that won't happen,
but we all know that it will.
Andres, over to you.
Yeah, no, and that's a good point, Mike.
I know, for example, in those examples you mentioned,

(53:14):
and the ones that Sudhir mentioned,
we're gonna put them on the episode notes as well.
But just understanding, and I think we all knew this one,
AI is vulnerable to attacks.
We've seen a lot of stuff going on out there.
So just making sure, I really like this

(53:35):
because it opened up to other things
that we probably didn't know.
In my case, I didn't know, so it helps a lot.
What are we doing from the Cisco's point of view?
What are we doing as far as AI into our products?
How we're securing it?
Sounds responsible to me.

(53:56):
Of course, anything can happen.
So like Sudhir mentioned, just wait and see
what we can accomplish with AI
and where else we're gonna see it.
The hype just brings a lot of opportunity
and a lot of moving fast, breaking things.
So hopefully we don't break too many things, if anything.

(54:19):
Okay, fingers crossed.
Yeah, and then the last thing is
recommendation for our customers.
Like if you're getting into AI,
if you're building something,
make sure you keep a few things in check.
Make sure you monitor it.
Make sure you test it out all the time
and make sure that you keep it secure.

(54:39):
And that's all I have from the episode.
That was great.
Great, I really enjoyed today's conversation.
Sudhir and Joel, thanks for taking the time
to join us on the show today.
You guys have a wealth of experience
pertaining specifically to AIs.
So really have enjoyed it.
And thanks for all the good you do in the world.
Keep on doing your part to keep us secure.

(55:01):
Andres, our next two episodes
kinda are a pairing going together.
September 5th is a deep dive on XDR.
And kind of a special surprise,
the following day, September 6th,
we're gonna see a live demo of Cisco XDR.
And so get to see the dashboard.
That'll be pretty cool.
We'll send out some updated invites for that.

(55:21):
I really enjoyed today's conversation, guys.
Please stay secure and we'll see you on the next show.
Bye everybody.
Thank you.