September 17, 2024 • 39 mins

Welcome to "AI or Not," the podcast where we explore the intersection of digital transformation and real-world wisdom, hosted by the accomplished Pamela Isom. With over 25 years of experience guiding leaders in corporate, public, and private sectors, Pamela, the CEO and Founder of IsAdvice & Consulting LLC, is a veteran in successfully navigating the complex realms of artificial intelligence, innovation, cyber issues, governance, data management, and ethical decision-making.

Curious about the future of artificial intelligence in the realm of cybersecurity? We welcome Greg Sisson, Co-founder and Chief Operating Officer (COO) of CI-Discern, a seasoned cyber executive with a remarkable 40-year career in public service, including the US Army and various government agencies. Greg takes us through his transition from government roles to entrepreneurship and shares the inspiring motivations behind co-founding CI-Discern. Learn how understanding an organization's mission and its stakeholders is crucial in navigating the risks and seizing the opportunities presented by digital transformation. Personal milestones, such as the birth of his granddaughter, also played a significant role in shaping Greg's career decisions.

Discover why data is the crown jewel of the AI era, surpassing the value of hardware and software. We discuss the pivotal roles of Chief Information Security Officers (CISOs) and Chief Data Officers (CDOs) in protecting this invaluable asset. Hear Greg's insights on the importance of a collaborative approach between these roles to safeguard data effectively. Amid the rising tide of data breaches, maintaining consumer trust through timely and transparent communication is more important than ever. Don't miss out on learning about robust data loss prevention measures that every organization should prioritize to prevent breaches from becoming the norm.

Get ready to explore the dual-edged sword of AI technologies, from deep fakes to adversarial AI threats. We delve into the complexities of AI governance and compliance, emphasizing the need for continuous monitoring and human oversight to maintain ethical standards. Greg sheds light on the critical intersection of software bill of materials (SBOM) and AI, and the importance of securing language models against adversarial attacks. Finally, we discuss the shift from being the office of "no" to the office of "know," emphasizing the importance of collaboration, education, and a balanced approach to AI's benefits and challenges. Tune in to gain valuable insights and actionable strategies to navigate the evolving landscape of AI and digital transformation.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Pamela Isom (00:00):
This podcast is for informational purposes only.
Personal views and opinions expressed by our podcast guests
are their own and not legal advice, nor health, tax, or other
professional advice, nor official statements by their
organizations. Guest views may not be those of the host.

(00:32):
Hello and welcome to AI or Not, a podcast where business leaders
from around the globe share wisdom and insights that are needed
now to address issues and guide success in your artificial
intelligence and digital transformation journey.
My name is Pamela Isom and I'm your podcast host.
We have another special guestwith us today.
That's Greg Sisson.

(00:52):
Greg is a cyber executive.
He's a US Army veteran.
Thank you for your service.
He is co-founder and chief operating officer of CI-Discern.
He's a friend.
He's a colleague that I worked with at the US Department of
Energy.
Greg, welcome to AI or Not.

Greg Sisson (01:15):
Thanks, Pam, it's awesome to be here.

Pamela Isom (01:18):
So let's talk a little bit more about you.
Tell me about yourself, tell me about your career journey, how
you got where you are today, what caused you to dive into
entrepreneurship.

Greg Sisson (01:33):
Great.
I'll try to just highlight the wave tops here.
It's been a while, but I actually spent almost 40 years in
public service.
I started out enlisted in the Army, then went to officer
candidate school and got a commission as a communications
officer in the Army, and retired from the Army in 2004.
I was in a training organization at that time.
It was called Joint Forces Command, and we were in what they

(01:55):
called the J-7, which was the Joint Warfare Center. I was able
to retire and go straight into government service and serve in
a technical role there helping the mission rehearsal program,
preparing large joint staffs to go to Kabul and Baghdad.
And I did that for about 10 years, and it was an amazing
opportunity to continue to give back to the mission, support the

(02:19):
mission, and provide training to those staffs.
Then I did a short stint in the Joint IO Range as the Deputy
Program Manager.
The Joint IO Range is a DOD closed-loop range to test offensive
and defensive cybersecurity capabilities.
And then I was given an amazing opportunity to join the Defense
Senior Leader Development Program.
It was a two-year program designed to prepare government

(02:40):
civilians across the Department of Defense for senior executive
service. Part of that was spending a year at the Naval War
College up in Newport, Rhode Island, and then I came back to
the Pentagon and helped to develop the DOD's first cyber
strategy. Then I went up to an organization called Joint Force
Headquarters DODIN, which is the defensive arm of US Cyber
Command, where I was the deputy director of operations, or

(03:02):
deputy J3, and then the chief of staff of that organization.
And then, in 2018, I moved down to the US Department of Energy,
where I held a number of roles, but culminated my civilian
career as the CISO at the US Department of Energy, which is
where we met.
Yeah.
And so, as far as entrepreneurship, I actually left government
service in 2022 and decided to go into industry,

(03:26):
and I joined Ernst & Young as a consultant, working with energy
companies and manufacturing companies to help their CISOs with
their cyber program transformation.
It was an amazing opportunity, and that's where I met our
founding partner, our managing partner of CI-Discern, Dylan
Diefenbach, who led the cyber and energy practice in the

(03:46):
Americas.
He was a partner at EY. And late last year, in December, our
granddaughter was born; our daughter actually lives overseas.
I really started to think about what was important and what I
wanted to focus on in the next five or 10 years, and so Dylan
and I started to talk, and the more we talked, we

(04:08):
settled on this idea of: why don't we go out, take the
experience and the people we know, and do something that we
want to do, focus on our families, focus on things that we want
to work on, and go out there and help companies?
Really, that's where the name of the company came from. This
was all Dylan that came up with this, but it's really

(04:29):
interesting, and it's a great lead-in to the discussion we're
going to have today, because of the word discern.
It's really about discerning perception versus reality and
helping companies take a discerning look at risk.
And so, as we start to talk about AI and AI risks and benefits,
it's all about taking a discerning look, and that's

(04:51):
where the company name came from.
That is my journey to entrepreneurship.

Pamela Isom (04:57):
Well, congratulations, I'm excited for you.
I did the same thing, a similar trail, but I just always thank
people for their service, so I'm going to do that one more time,
because your service is what keeps us safe, and even our
service in the

(05:18):
cybersecurity realm keeps us safe.
So I really appreciate it, and thank you again.
I'm fascinated with what you're doing today, but I'm also
fascinated with your background, because you were at DOD, you
served in the military.
You mentioned Kabul.
I am excited for you and the grandbaby, so congratulations on
that.
I know that feeling too. And so I want to go into a little bit

(05:42):
more discussion around discern, because I like how you said
that.
So let's go deeper into discern.
How do you discern in this digital transformation era?
What are some examples of how we would discern risks, and how
we would discern how organizations can be successful and
effective?

Greg Sisson (06:02):
I think it's all about, and this really applies across the
board, understanding the organization, understanding the
mission of the organization, and what our role is, as
cybersecurity professionals or technical professionals, in
enabling those missions.
And in order to do that, we have to talk to the stakeholders.
We have to get down to each of the different parts of the

(06:25):
department or the agency or the organization and really
understand what they're trying to do.
You can't put in place security policies, you can't put in risk
controls and risk mitigation steps, without understanding the
potential impact of those steps or those controls on their
mission.
The other thing that we have to do is really understand and

(06:46):
have good asset visibility. And when I'm talking about assets,
this will ring true with you, you'll understand: I look at data
as an asset.
A lot of people look at hardware and software as their big
asset inventory, but I think data is even more important as an
asset. It's about having a good understanding and inventory of
your data assets, but also being able to classify that data so

(07:09):
that you understand how to protect it, where it's located, and
who has access to it.
And by doing that, then you can start to have a discussion
around AI and how AI could potentially be a benefit or a risk,
depending on how employees or how the organization wants to use
certain tools.
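
Greg's point about inventorying and classifying data assets can be made concrete. Below is a minimal, illustrative sketch in Python of the kind of inventory he describes: what you hold, where it lives, how it's classified, and who owns access. All names, fields, and classification tiers here are assumptions for illustration, not anything Greg or CI-Discern prescribes.

```python
from dataclasses import dataclass, field

# Illustrative classification tiers; real programs define their own.
LEVELS = ["public", "internal", "confidential", "restricted"]

@dataclass
class DataAsset:
    name: str            # e.g., "customer_billing_db" (hypothetical)
    location: str        # where the data lives: system, bucket, share
    classification: str  # one of LEVELS
    owners: list = field(default_factory=list)  # who can grant access

    def __post_init__(self):
        if self.classification not in LEVELS:
            raise ValueError(f"unknown classification: {self.classification}")

# A tiny inventory: knowing what you hold, where it is, and who touches
# it is the precondition for any AI benefit-versus-risk discussion.
inventory = [
    DataAsset("customer_billing_db", "on-prem SQL cluster", "restricted", ["dba-team"]),
    DataAsset("marketing_site_copy", "public CDN", "public", ["web-team"]),
]

# Example question a CISO or CDO might then ask: which assets are too
# sensitive to expose to an external AI tool?
sensitive = [a.name for a in inventory
             if a.classification in ("confidential", "restricted")]
print(sensitive)
```

An inventory like this is what lets the AI conversation start from facts: which assets could safely feed an AI tool, and which could not.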

Pamela Isom (07:31):
Yeah, I agree with that.
You knew I was going to agree with that.
So the biggest concern I have is: do we understand the assets?
I always talk about the assets, and I do think that data is one
of the critical assets that can make or break an organization.
From a privacy perspective, you've got to know who has

(07:52):
access to the materials, who is able to get access because of
AI, and then who's using the data to advertise.
So nowadays, because of cookies and all this, people are able
to get access to information, use that information, and make
additional decisions that you probably didn't even think about,

(08:12):
like advertising.
So I do think that the whole data ecosystem is worth securing
and protecting, and in order to do so, you have to understand
the lineage, you have to understand the actors, you have to
understand how they could potentially use it.
It's a bit more complicated now than I think we thought about

(08:37):
in the past.
I think it was always there, but AI has elevated the concerns
and the need for a focus on data. And I think, from a CISO
perspective, which we're going to get into, CISOs don't always
see data as their primary responsibility.
From what I have been experiencing in the past, CISOs

(08:58):
think that the data is the responsibility of the chief data
officer.
So now I think that this has brought to bear that it is the
cybersecurity leaders that literally protect the data, but the
data officers help us understand where the data is, the lineage
of the data, and what are some good practices to

(09:19):
put in place to help to protect it.
But the two work together.
What's your perspective on that?

Greg Sisson (09:26):
Absolutely.
I mean, that's a hand-in-glove relationship, and some
organizations don't have a chief data officer, so who is going
to do it if the CISO is not looking at it?
And if you look at the cybersecurity triad, confidentiality,
integrity, and availability, all of those things are based on
data.
All of those things are talking about the confidentiality of
data and maintaining the integrity of data, and then the

(09:48):
availability of data so that people in the organization can use
it for the purpose that it was intended.
So yeah, I think when we look at it from that perspective,
CISOs absolutely have a responsibility to protect it, but it's
a huge benefit to have a chief data officer, somebody that
really understands data and can help you from a management
perspective, helping to educate the users in the

(10:10):
organization about how to classify their data and then, working
with the CISO, how to put controls in place around that data to
protect it, but also make sure that it is available, if needed,
to third parties and to other organizations as needed.
So I think it's a symbiotic relationship and an important
relationship, and I think that chief data officers, their

(10:32):
position in the organization probably needs to be raised up now
with AI and the implications around the potential for data loss
and those kinds of things.
I think organizations need to go back and re-look at their data
loss prevention programs and the controls they have in place
around data loss, now that we have tools like GPT and

(10:52):
those kinds of things, where there is a greater potential for
data to leave the organization without the CDO or the CISO
knowing about it.
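
As one concrete flavor of the data loss prevention controls Greg is describing, here is a minimal, hypothetical sketch of a pre-submission check on text bound for an external generative AI tool. Real DLP products use classifiers, document fingerprinting, and exact-match dictionaries; the regex patterns and blocking policy below are illustrative assumptions only.

```python
import re

# Illustrative patterns only; production DLP goes far beyond regexes.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_marker": re.compile(r"\b(CONFIDENTIAL|RESTRICTED)\b", re.IGNORECASE),
}

def check_outbound_prompt(text: str) -> list:
    """Return names of sensitive patterns found in text destined for an
    external AI tool; an empty list means the text may pass."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

# Usage: gate the prompt before it ever leaves the organization.
prompt = "Summarize this CONFIDENTIAL incident report for the board."
hits = check_outbound_prompt(prompt)
if hits:
    print(f"Blocked: prompt matches sensitive patterns {hits}")
else:
    print("Prompt allowed")
```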

Pamela Isom (11:01):
Yeah, I agree with that too.
So, the data breaches. I was just in a discussion recently and
we were talking about the elevation of data breaches, how it's
occurring more and more, and how sometimes it's becoming so
common that organizations are looking the other way.
And there was an experience just recently with one of the

(11:22):
mobile carriers, and we found out about it a little late, if
you ask me, when they finally informed the public about it. And
so I am concerned that we need to pay attention to data
breaches and inform consumers that this has

(11:43):
transpired sooner in the process. So that's one of the concerns
that I have. And I get it, Greg, that this happens because you
want to try to get your fundamentals in place first, so when
people start calling about their information, you have a
response that you can provide to them.

(12:03):
But what's concerning is it was late when we got informed, and
so how do we protect ourselves?
So, yeah, you have the insurance that you're going to make
available.
My concern, and what I'm hoping, is that the CISOs and the
folks that are involved with digital transformation in general

(12:26):
understand that information travels quickly, and organizations
don't need to look the other way.
We need to be transparent, we need to be forthright, and let
consumers know that these breaches have occurred, and breaches
should not be the norm.
So we need to be doing everything we can to prevent these from
happening, to monitor data loss. And that is why I

(12:49):
think it's so good to have you talking with me today, because
one of the things I really am concerned about is that the
breach of information is sort of getting an "oh yeah, that's
normal" response, and it shouldn't be normal.
What's your perspective on that?

Greg Sisson (13:07):
Absolutely.
You talked about getting information out and being transparent.
I was always a firm believer, and was trained, that it's much
better to get facts out quickly when you have a breach, because
otherwise people are going to make stuff up, they're going to
make assumptions, and then you're going to have to recover from
those assumptions.
It's just going to be harder from a public affairs perspective,
from a potential reputational loss, reputational

(13:30):
risk perspective, and everything else.
So, yeah, I'm a firm believer in getting facts out early and
often, trying to get ahead of perceptions and those kinds of
things, because otherwise it's just going to make it worse for
you in the end.
You're thinking you're giving yourself time to prepare and all
this, but in the end it's going to be worse, and you're going
to take more time and more resources to

(13:50):
respond to things you didn't want to respond to, versus what
you could have done by just putting out facts early and often
to the media in the case of a breach.

Pamela Isom (14:00):
Okay, so here's a question.
So, you being a seasoned CISO, tell me what you see as the
benefits and risks that CISOs face. We talked about it a little
bit, but I want to go a little bit more into the AI era, this
digital AI era. So what do you see as benefits and

(14:20):
what do you see as risks?

Greg Sisson (14:23):
I think the benefits are very apparent.
There are tremendous benefits around contracting.
There's a list of things that are hugely valuable to
organizations: not only taking high-demand, low-density parts
of their workforce and relieving them of a lot of
responsibilities

(14:43):
that can easily be put into a generative AI tool, an AI tool
that will help them do tasks, relieve them of those tasks, and
enable them to focus on other things. So I think that's hugely
valuable, and there's a never-ending list of the benefits of AI
in an organization.
I think where CISOs get into trouble is they can't see those
benefits when they look at it just through a risk lens and

(15:08):
they try to ban it: we're not going to do AI, we're not going
to allow people to use ChatGPT, we're not going to allow access
to these tools. Because anytime you do that as any executive,
and especially as a CISO, then inevitably you're going to have
shadow AI. And what I mean by that is you're going to have
people in the organization who are going to want to do it
because they see the value in

(15:30):
how it's going to make their job easier.
So they're going to do it.
They're going to find ways to do it, and so they're going to
find ways to circumvent the controls that you put in place, and
potentially open up additional attack vectors and additional
risks that you don't want.
So I think the most important thing is to go back to that
understanding of the organization and the mission, and

(15:52):
talking to stakeholders: how do you want to use AI?
How have you as an organization identified uses for AI?
Now let's have a talk about the risks associated with that, and
then how do we put controls in place to enable you to use AI?
It's similar to the discussion we've always had about personal

(16:12):
email and some of those other things that potentially introduce
risk into the organization.
The response is always: well, you can't do personal email.
Let's talk about: yes, you can, but here's the way you do it
responsibly.
We had a pre-discussion around "responsible," and that's, I
think, the most important word when it comes to AI: how do we
do it

(16:32):
responsibly?
And we look at it through a security lens.

Pamela Isom (16:37):
I like that.
I think I heard you say: be more collaborative and be more
transparent.
I heard you say you want the CISOs and the organizations to be
more collaborative with the stakeholders, be more transparent,
and include them in

(16:57):
the decision-making process as far as how we use AI. And I
think I also heard you say, which I agree with, let's be more
open about discussing the risks.
So let's discuss the risks, let's discuss the concerns, and
let's come up together with an approach for adoption of AI
within the organization.
I tend to process what you're saying and then translate

(17:19):
it into my own speak, so you just tell me if I'm putting words
in your mouth.
But that's what I heard, and I think that those are excellent
points.
Did I miss anything in what you were saying there?

Greg Sisson (17:31):
No, I think you captured it.
And the other thing that I think we have to go back and look
at, and we talked about going back and looking at data loss
prevention: I think we need to take a look at all of our
policies to make sure that we've now addressed AI and the use
of AI in all of those policies, because there are policy
implications throughout

(17:52):
the organization, and especially around training and awareness.
I think that employees mostly look at the benefit, how am I
going to be able to do my job better, and they don't
necessarily look at the risks.
They don't necessarily understand how an adversary can now use
this to exploit vulnerabilities or take

(18:12):
advantage of something and introduce risk. So I think training
and awareness is a big deal, and so is having somebody like
you, with a background in AI and expertise in this area, in an
organization to work with when you're developing your
cybersecurity training and awareness, because training is so
important.

(18:33):
I was just reading an article this morning, and hopefully it
doesn't take us off track, around what the states are doing to
prepare for the elections. There was an article in CyberScoop
about what Arizona and Minnesota are doing to train and
exercise their election officials to make them aware of how to
identify deepfake videos and how to identify social engineering

(18:54):
that's being done using artificial intelligence. I applaud
those two states. I hope other states are doing it, and I hope
organizations are doing it, because it's going to take training
and exercises to get people to be aware of what the risks
really are and how they can still benefit from it and, at the
same time, protect the

(19:14):
organization and protect their personal information.

Pamela Isom (19:18):
Yeah, I read that article too, and I was actually happy to see
it and to know that that's going on.
And I also felt good seeing that, because that's part of what
I'm trying to do with my training programs: I blend AI and
cybersecurity together, and that's on purpose, because they go
hand in hand, just like data.
There's AI, there's cyber, there's data.

(19:40):
They all go hand in hand, and right in the middle of that is
human beings.
So in the training programs that I have, we go over these
things.
Yesterday I was in a discussion, Greg, and I was talking to
people about deepfakes and how they aren't always bad.
If I'm using a deepfake that translates something that I've
said for an international audience, that's a good deepfake
example.

(20:00):
But we always hear about the scary stuff.
So I try to help folks understand the benefits and the risks,
as you talked about, because there are benefits, but we can get
overwhelmed and inundated with the scary stuff.
There's good and there's also not so good, and people are just
people.
If people can misuse something, we're going to do it.

(20:24):
So the wheat and the tares, they grow together.
That's what I was brought up knowing: the wheat and the tares
grow together. So we have to have stewards in the mix to help
keep everything on track, and that's why governance is so
important.
That's why I teach ethics, that's why we do these things, to
help us understand. And the way to understand what the

(20:45):
states are doing, what we were just talking about: it's
transparency.
It's full transparency. Because if I go to a site, and I
remember this week I was getting ready to pay for something,
you should have seen me looking for ways to check online to
make sure it was an authentic site.
I wanted to be sure, and we have to do that.

(21:09):
We just have to do that because of the bad use cases of AI and
the bad actors.
So I think that those are good examples.
This morning, I was reading about a good use case of AI which
had to do with using drones, and I know we talked about this
when we were at Energy.
We're talking about using drones to assess the critical

(21:33):
infrastructure and the health of the infrastructure, and one of
the cities in California is moving forward with that, and
they're using the drones.
Well, you know, behind that is artificial intelligence. So when
I looked at that, I thought, okay, this is good.
This is computer vision and using the computer vision tools

(21:53):
and capabilities, which is a great use case.
But you know, I thought about the security part, so they just
go hand in hand.
And you know we're dealing with critical infrastructure here.
Energy is one of those critical infrastructure components.
So it made me think about that.
What do you think about the drones?

Greg Sisson (22:14):
Yeah, I think you were spot on.
Drones are extremely valuable, especially from a physical
security and safety perspective. Safety is huge.
I mean, why put a human on a large transmission line, on a 5G
tower, or down hundreds of miles of transmission lines through
country that you don't want people driving on?

(22:37):
So, certainly from a safety perspective in energy, it's huge.
And from a physical security perspective: being able to inspect
substations to look for breaches in physical security,

(22:58):
to watch for physical security threats.
We've had a number of instances around the country where we had
people shooting at substations.
So we can use drones and other autonomous vehicles to do that,
to protect those assets without having to put humans there.
I think it's amazing. As for other ways to use artificial
intelligence,

(23:20):
we're seeing a lot of energy companies and others thinking of
new ways to use machine learning and artificial intelligence
algorithms to do all sorts of things in energy around
resiliency, reliability, and understanding vulnerabilities in
the electrical grid and those kinds of things.
So, yeah, there's huge benefits to the energy sector as well as
other critical infrastructure sectors.
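
For a taste of the computer vision behind such drone inspections, here is a toy sketch using OpenCV that flags changes between a baseline photo of a substation and a new drone image by frame differencing. The file paths and threshold are hypothetical placeholders, and production systems use far more capable detection models; this only illustrates the idea of automatically flagging scenes for human review.

```python
import cv2

# Hypothetical image paths: a baseline photo and a new drone capture.
baseline = cv2.imread("substation_baseline.jpg", cv2.IMREAD_GRAYSCALE)
current = cv2.imread("substation_drone_pass.jpg", cv2.IMREAD_GRAYSCALE)

# Match sizes, blur to suppress sensor noise, then difference the frames.
current = cv2.resize(current, (baseline.shape[1], baseline.shape[0]))
diff = cv2.absdiff(cv2.GaussianBlur(baseline, (5, 5), 0),
                   cv2.GaussianBlur(current, (5, 5), 0))

# Threshold the difference and measure how much of the scene changed.
_, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
changed_fraction = cv2.countNonZero(mask) / mask.size

# A large change might mean damage, intrusion, or vegetation encroachment;
# the 2% threshold is purely illustrative.
if changed_fraction > 0.02:
    print(f"Flag for human review: {changed_fraction:.1%} of scene changed")
```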

Pamela Isom (23:38):
Yeah, okay. So we were talking about governance, risk and
compliance, GRC, earlier, and I just want to probe a little bit
more, or discuss a little bit more, how we see those programs
evolving. Particularly, I know we talked about the CISO and the

(23:59):
CISO's role and responsibility, but what about the GRC
programs?
Do you think that they are evolving at the pace that they
should be in this AI era, or what's your take?

Greg Sisson (24:11):
This, as well, is really organization dependent.
It's how much the leaders in the organization want to take the
initiative to make sure that they're using artificial
intelligence to advance their governance, risk and compliance
programs.
I would absolutely use AI and generative AI tools like GPT to
write policies. And when I say that, I don't mean to write the
policy

(24:33):
and then take it straight out of the tool and issue it to an
organization. But certainly I would have loved to have had
those kinds of tools to at least draft the policies and then
give them to a team of people.
At least give them a head start, give them a warm start to
developing a policy.
I mean, it's amazing. I've had some people giving me examples
of policies that were generated from an AI-based tool, and

(24:57):
they're pretty darn well done.
It doesn't take a lot to then take your experts, and your
understanding of your mission and how that policy applies to
your organization, and make some small tweaks to it.
What an amazing savings in time and resources, being able to
use that to do policy generation.
So, from a policy perspective, in that part of GRC,

(25:19):
I think it's amazing, and I think we should all take advantage
of it. From that perspective, I think the other thing that we
talked about earlier was just going back.
I think that it's very important to go back and review your
governance process and your compliance process and those
things. Training and awareness falls underneath there as well,
which we talked about.
But I think it's just important for everybody to go back and

(25:39):
review all of those things to really understand whether or not
they cover all of the risks and the benefits that artificial
intelligence introduces to the organization.
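
One possible shape for the "warm start" Greg describes is sketched below. It assumes the OpenAI Python client with an API key in the environment; the model name and prompts are illustrative assumptions, and, as Greg stresses, the output is a draft for expert review, never something to issue straight out of the tool.

```python
from openai import OpenAI  # assumes: pip install openai, OPENAI_API_KEY set

client = OpenAI()

def draft_policy(topic: str) -> str:
    """Ask a generative model for a first-draft policy. The draft must
    still be reviewed by experts who know the mission and the org."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You draft corporate security policies for human review."},
            {"role": "user",
             "content": f"Draft an acceptable-use policy for {topic}."},
        ],
    )
    return response.choices[0].message.content

# Usage: generate the warm start, then route it into the GRC review process.
draft = draft_policy("employee use of generative AI tools")
print("DRAFT ONLY - route to GRC team for expert review:\n", draft)
```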

Pamela Isom (25:50):
I agree.
I think that there needs to be continuous monitoring of any
material that is generated for us via AI technologies.
There need to be humans in the loop, and I'm going to come back
to that. But not only that, there needs to be continuous
monitoring because of

(26:12):
the fact that situations change.
So a policy that's effective, reviewed by the ethics team, for
instance, for ethical assurances, and by the risk management
teams for risk mitigation purposes: that's a good thing, and
that should be a part of the process.
The problem with that is I'm getting complaints that humans

(26:32):
slow things down.
Those are discussions that I've been involved with.
I've literally been in debates, right, because people are
saying humans are starting to slow things down.
What I always say is: that may be true, but humans are also
biased. Not everyone is introducing harmful bias, and that's
why we talk about diversified stakeholders,

(26:53):
diversity of opinions, diversity of perspectives.
So I think it's important to break the collateral that's
generated by AI into groups and determine what use cases make
the most sense for AI.
I think a policy like what you described is a good example,
because it just literally helps with the development of the

(27:15):
policies, and then you just have those checks and balances in
place.
But I tell you, I was in an interesting discussion this week
where one of the people was saying: as long as we keep humans
in the middle of it, it's going to slow things down, because
humans are biased in themselves. And I again re-emphasized the
need for it.
That's why you have diversity of opinions, diversity of

(27:37):
perspectives.
I even go back to the example that we mentioned earlier about
the automation of the drones that are looking at the utility
lines and the fencing and the perimeters and everything, for
safety, to keep humans out of harm's way.
There still needs to be a human in there somewhere. And my
response to that person was: I know that we still need to get

(28:00):
to the place where automation is, where there's minimal human
interjection, but there has to be a human for many reasons.
If the AI is building the rules for itself, if it's starting to
decide on its own what needs to be done, humans need to be able
to shut that thing down if it goes off track.

(28:20):
So you cannot take humans out of the loop in their entirety.
But those are discussions that we're starting to have, because
there are concerns.

Greg Sisson (28:29):
I'm so glad you teed this up, because I did want to go back to
the human element. And I was so glad when I was reading the
policies and things that are coming out from the White House
and from the Department of Energy talking about responsible
use. They're talking about responsible AI, they're talking
about the development of these learning models, and you were
talking about biases.
I mean, unfortunately, those biases can be taken into

(28:52):
learning models.
The artificial intelligence starts to develop those biases, and
so I think it's so important that we look at this across a very
diverse spectrum, that we really understand the roots of how AI
performs and how it learns, and that we have very, very strict
controls and regulatory oversight into the development of those

(29:13):
learning models, so that we don't allow the wrong biases to be
interjected into them and then pay for that later on down the
line. And I may be mixing up something, and you can probably
expand on that a little bit, but I think the human element part
of the training of those learning models, and the biases that
could be introduced, is an important discussion.

Pamela Isom (29:34):
It is.
You got it. The easiest thing to do, and it's not easy, by the
way, but the most straightforward thing to do, is to understand
where you want second and third opinions, second and third
points of view.
So, independent assessments, secondary points of view,
secondary perspectives: that is what we need, because that

(29:56):
independent evaluation helps with the understanding.
Everyone has biases.
The independent perspective says: well, did you consider
diverse situations? Did you consider this, did you consider
that?
So that brings that perspective, and you can automate that.
You can automate it.
The thing for us to do is to be stewards about when

(30:18):
automation makes sense.
Also, be good stewards about AI: what use cases make the most
sense for AI?
So, what are those risk categories?
Start to get an understanding, from an organization
perspective: what are the risk categories for AI use cases?
What makes the most sense?
What use cases fit within those risk categories for my
business?

(30:40):
And so that's what we do, right? We're trying to help
organizations understand this.
You've got to understand that.
And, speaking of that, I was looking at the report that was
published by the Department of Energy, and they speak to
different risk categories.
So they did this AI summary assessment report.

(31:00):
They created this report, which is based on the executive order
on artificial intelligence.
They have four risk categories, and one of the risk categories
was compromise of the AI software supply chain.
This sounds like you. Because when we worked together at
Energy, and I know it's been a minute, but I remember things, I
remember good experiences.

(31:20):
So we always talked about the supply chain and supply chain
risk management.
I'm very particular about the data and the source models in the
supply chain.
That's a part of that supply chain

(31:40):
that's very vulnerable.
So I noticed that they had this risk category number four,
compromise of the AI software supply chain, and what they look
at, in addition to the supply chain, is the cybersecurity risks
associated with critical infrastructure like energy.
I wanted to know and get your perspective on the executive

(32:04):
order and then that report. What's your take on supply chain
management in the era of AI and cybersecurity?

Greg Sisson (32:14):
I think it's analogous to the work that we're doing around a
software bill of materials, around SBOM, and just really
understanding.
I mean, it goes back to the development of these language
models and those sorts of things: how we're protecting those
language models through the supply chain and the development
chain, and how we're protecting them from being compromised.
It's one thing to have an issue in a language model, but if an
adversary is able to

(32:37):
access that language model and change it from what it was
originally intended to do, then we've got some serious issues
from a risk and safety perspective.
I think it goes back to a software bill of materials. Whether
it's a bill of materials or supply chain risk mitigation around
the language models and how these AI tools are developed, it's
about how we're putting controls in place to

(32:58):
make sure that what was intended when it was released by the
developer, as it's taken through the development lifecycle and
actually put into the procurement lifecycle, how are we
protecting those critical elements to make sure that they're
not changed in any way for malicious purposes?
That's my perspective on it.
I mean, there's much more to come from a supply chain

(33:18):
perspective, but I do think that's the most important part of
it, and that's my understanding of it.
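
A simple control in the spirit of what Greg describes, making sure a model artifact hasn't been changed between developer release and deployment, is checksum verification. This sketch uses only the Python standard library; the filename and expected digest are placeholders standing in for values a developer would publish alongside the model, for example in signed release notes or an SBOM-style attestation.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 so large model files never need to
    fit in memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder values: in practice the expected digest comes from the
# developer's published, ideally signed, release metadata.
EXPECTED = "0000000000000000000000000000000000000000000000000000000000000000"
artifact = "language_model.bin"

if sha256_of(artifact) != EXPECTED:
    raise SystemExit(f"{artifact} does not match its published checksum; do not deploy")
print(f"{artifact} verified against published checksum")
```

Verifying the artifact at each hand-off in the procurement lifecycle is what closes the gap Greg points to between what the developer released and what actually gets deployed.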

Pamela Isom (33:23):
And, one of the things, and I agree with you, I'm with you a
thousand percent. So one of the things I'm thinking about as
we're talking about this is: I have solar in my home, the
traditional solar panels, and we have a device where I can
monitor the consumption, how much we're pulling from the grid
versus how much

(33:44):
we're pushing to the grid. In the day and time of AI, there's
some AI in there getting me this real-time information and
making predictions for us on how to better do some things, so
that we're pulling more from the sun and less from the grid.
And so, in the day and time of digital transformation,
cybersecurity, and adversarial AI, the adversary could look at

(34:09):
exploiting my process in order to get to many user accounts at
the utility provider. And so I think these are the types of
things that one might think are way out there and unlikely, but
that we have to guard against.
And so I think, even without thinking about it, if you go

(34:31):
back to the cybersecurity fundamentals: be careful about your
passwords, be careful about where you're leaving information.
Basic cybersecurity hygiene will help protect against instances
like that, but in the day and time of AI, this type of
information is vulnerable.
I think it's a part of the threat vectors for adversarial

(34:52):
AI, so I just wanted to add that.
Okay, great. Yeah, keep me straight.
Okay.
So now I want to know. We're in the portion of the discussion
where we're about to wrap up.
I want to know, first of all: is there anything else that you
wanted to talk about, to discuss?

(35:13):
So let me ask you that first.

Greg Sisson (35:16):
There is one other point. I don't want to get into the scary
part of AI, but I do think that for organizations, part of
their training and awareness for their workforce is to really
understand how an adversary can use AI, because it is
important.
I think that's an important way to train your people to
understand the risks that they could introduce to the

(35:38):
organization: by making them understand how an adversary could
use AI, especially when it comes to using artificial
intelligence to do open source intelligence gathering and
social engineering, and how much more effective a phishing
attack could be if an adversary used AI tools to do that social
engineering and that campaign development for phishing.

(36:00):
So I think that's important for people to understand.
And it's also important, going back to the discussion around
training people around AI, how deepfake videos can erode trust,
and how people inside the organization can recognize malicious
deepfake videos and how those videos could be used to
impersonate not only political figures in the country

(36:20):
but also executives and other people inside their organization,
to get them to do something malicious like transfer money or
things like that.
So I just think the adversarial part of AI and the threat part
of it is very important for people to integrate into their
training and awareness program for their organizations.
So that was the last point.

Pamela Isom (36:44):
I think that's an excellent point.
I'm going to reach out to you outside of this and share some
things with you as far as what I'm doing with my training
programs, so you can see the adversarial components. Because
you're right, you want to balance the good and the adversarial
components, but you don't want people to be in the dark.
So I'm going to get with you and share a few things with you,

(37:04):
and then, let's see, I'm going to see if you have some
perspectives, because I know you do.
As we wrap up here, do you have any words of wisdom or
experiences that you'd like to share with the listeners?

Greg Sisson (37:17):
I do, and I thought about this a little bit.
I think it's just really the basics that we kicked off with.
It's knowing your mission, knowing the organization,
understanding the benefits and the risks, communicating those
to the people in the organization, and working and
collaborating with stakeholders.
There's a common thing that's talked about with security
professionals: don't be the office of "no," N-O, but instead be

(37:41):
the office of "know," K-N-O-W. Be the office of know.
Take the time to educate yourself and your staff on how
artificial intelligence works and how it can be used for good,
but also understand, from the same perspective, how it can
introduce risks, and then communicate those and train the
people in the organization.
I think those are my parting words.

Pamela Isom (38:04):
That's pretty cool.
So don't be the office of N-O, but be the office of K-N-O-W,
and make sure you stay collaborative.
That is just powerful. And I really appreciate the fact that
you pointed out that we want to focus also on the benefits and
not make it a tool that is not permitted

(38:24):
within the organization. Because you see the stats, I see the
stats out there: people are using generative AI.
There are numbers that came out this morning.
Workers within organizations are using AI whether the leaders
approve it or not, even if they have to put it on their
personal devices, but they're using it.
And how are you going to retain staff if you're blocking the

(38:46):
use of tools that they need to use?
I appreciate it.
I really want to thank you for taking the time to talk to me
today, for participating in this podcast effort, and for all
the support that you have provided and that you continue to
provide.
I appreciate you very much, and I want to thank you for

(39:07):
just being here.

Greg Sisson (39:08):
I appreciate you too, and our friendship goes back a number of
years. The help you gave me when we were thinking about
starting a company, I absolutely appreciate your guidance and
your wisdom, and thanks for inviting me today.
I appreciate it.
I enjoyed it.
The time flew, so it must have been a good conversation.

Pamela Isom (39:24):
A good conversation.