
May 30, 2023 | 25 mins

In this episode, Rod Schatz (Data & Digital Transformation Executive) and Andreas Welsch discuss how leaders can prepare their business for AI-generated disinformation. Rod shares his perspective on the risks of generative AI for disinformation and provides valuable advice for listeners looking to raise awareness within their organization.

Key topics:
- Determine how generative AI will contribute to digital disinformation
- Develop strategic responses in a changing digital landscape
- Establish a robust disinformation resilience framework

Listen to the full episode to hear how you can:
- Build AI literacy within your organization
- Balance corporate goals and societal responsibility
- Create an AI safety council and preparedness plan

Watch this episode on YouTube:
https://youtu.be/HG4aIN3_GOM

Questions or suggestions? Send me a Text Message.

Support the show

***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.


Level up your AI Leadership game with the AI Leadership Handbook:
https://www.aileadershiphandbook.com

More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Andreas Welsch (00:00):
Today, we'll talk about preparing your business for AI-generated disinformation, and who better to talk to about it than someone who's got a strong perspective on that: Rod Schatz. Hey Rod, thanks for joining.

Rod Schatz (00:12):
Thanks for the invite. I'm looking forward to the discussion.

Andreas Welsch (00:15):
Awesome.
Hey, why don't you tell our audience a little bit about yourself, who you are and what you do?

Rod Schatz (00:21):
Sure. So I've been a technology executive for the last 15 years, specializing in data and digital transformation. In 2016, I co-authored a book on digital transformation that's allowed me to see the world from a totally different perspective, from a digital-first mindset. I've also pioneered a couple of startups.

(00:41):
And disinformation is something I'm really interested in because I'm somewhat worried about society as a whole and where all this is gonna go.

Andreas Welsch (00:51):
That's awesome. Hey, it sounds like you've really seen a lot in that space. Disinformation, and all the potential of AI leading to more disinformation, is something I've been looking into as well and been thinking about a lot and exploring lately. So, a really timely conversation. Now, Rod, how about we play a little game to kick things off?

(01:13):
What do you say?

Rod Schatz (01:13):
Sure.
Sounds good.

Andreas Welsch (01:15):
All right, awesome.
So this game is called In Your Own Words. When I hit the buzzer, the wheels will start spinning, and when they stop, you'll see a sentence. I'd like you to answer with the first thing that comes to mind and why, in your own words. And to make it a little more interesting, you'll only have 60

(01:36):
seconds for your answer. Now, for those of you watching, also drop your answer in the chat, and why. Really curious to see what you come up with. Now, Rod, are you ready for What's the Buzz?

Rod Schatz (01:48):
I am ready.

Andreas Welsch (01:50):
All right, then let's get started.
If AI were a bird, what would it be?
60 seconds on the clock.
Go.

Rod Schatz (01:59):
Okay, perfect. It would be an eagle. And why an eagle? Cuz eagles soar. They get to see a broad perspective of the landscape. They're sharp, they can hunt, and ultimately an eagle is at the top of the food chain. And I see AI as being one of those things that's gonna quickly move up into our corporate food chain and hierarchy in organizations to be all powerful.

(02:20):
And that's why I see it as an eagle.

Andreas Welsch (02:23):
Fantastic. Within time, and a great answer. Thank you so much. I'm taking a look at the chat here real quick to see where folks are joining us from. So Asif from Dallas, Texas; Michael in Boston; Mary in Chicago; Elly in Houston. So for now it seems like a predominantly US-based audience, but I'm curious if you're joining from anywhere else

(02:45):
in the world, please drop it in the chat as well. So the eagle is this majestic bird, this majestic animal. And if you've ever seen one live, they're quite impressive, right? I think the eagle also gets us to our first question, and

(03:06):
especially if we look at something like generative AI, here in the US the eagle is a key symbol and a key animal as well. Now, looking at generative AI, I'm wondering if we believe that it will contribute to the disinformation megaphone and to what extent, and if you've actually seen some current

(03:27):
examples. And again, maybe government and the eagle have their first play here.

Rod Schatz (03:34):
Yeah, sure. So I personally think we're at a bit of a precarious spot. And what I mean by that is governments to date haven't done a good job of regulating big tech, in particular social media. And I think generative AI is really exposing some of those flaws in the lack of regulation.

(03:55):
And the key thing where I'm going with that is the data that's used for generative AI is full of a lot of biases. And the thing that concerns me the most is how those bad actors are gonna exploit that large training data set to use it for disinformation or harm. And I think we really have to break it down into two aspects.

(04:15):
One is the personal aspect, where generative AI can do damage and create disinformation about a person, but also for brands, large corporations or medium-sized corporations. In terms of some of the areas where we obviously can see the biggest impact of disinformation, it's gonna be in politics. And the other thing I see is there's a strong likelihood of

(04:39):
social problem amplification. So for vulnerable populations, I can see that generative AI can be used in ways to exploit them. So in terms of some examples, I'm gonna highlight three. It was probably about two months ago, with the looming indictment of Donald Trump.

(04:59):
There were a bunch of generative AI images that popped up of him being arrested in lower Manhattan, where he was on the street and there was a bunch of police officers grabbing him. And so those images were a hundred percent generative AI created. So, text to image. Another one that I bumped into a couple weeks ago was the Turkish

(05:22):
elections. So the person trying to be elected over the existing president. There was a video that was released of the challenger, and it was done in English. And it was a video, and it was posted on social media. And then about a day later he actually came out and said he

(05:43):
didn't create it. So it was all misinformation. And it was interesting too that it was done in English, not in Turkish. So that's another example of text to video. And then one that popped up on Twitter. There was an image posted of an explosion on the Pentagon grounds in the US, and then Twitter went into a storm on this and

(06:06):
the S&P dropped substantially. And then about 15 minutes later it was reported as a deepfake, and then things started to rebound. But those three examples show how easy it is to create disinformation with these new tools. And the thing that I'm finding fascinating is we're in the really early days of this technology. ChatGPT's been out since November; other iterations of it

(06:29):
a little longer. But the thing that I'm having a hard time with as a technologist is keeping pace.

Andreas Welsch (06:35):
That's right.
Yeah.
It evolves so quickly, right?
And there's so much news, so many improvements, and also some things being available open source, or you read about these examples of people training models on their gaming laptop. So they become really small, really resource efficient. You no longer need huge infrastructures and almost like

(06:55):
supercomputers to train these things. I think there's a significant risk in it. And what do you think we need to do individually to identify whether something is real information? What role do we play individually in this?

Rod Schatz (07:15):
I think the big thing is education. And one of the things I think a lot about with my children is: how do we know what to trust, and how do we know what not to trust? And what that ultimately comes down to is really solid fact checking. And that's what I mean by education: I think we all need to develop new skills, which is to evaluate the content that

(07:38):
we read, we see, we hear, and evaluate whether or not it feels trustworthy. So I think that's one aspect we all collectively need to do, that real education piece to develop, like I said, those fact-checking, analytical skills.

Andreas Welsch (07:56):
I think that's a very important point. And for me, whether you say media has always been divided or catered to a certain part of the population or what you believe, I feel with independent media there's always been a sense of trust and objectivity that you put into these organizations.

(08:16):
So if it becomes harder to discern whether something is real or not, or if you first of all need to ask yourself the question, is it plausible that this has happened, and then do your own fact checking, I think there's a huge change and a huge risk in itself as

(08:37):
well, and also questioning a bit our trust in these organizations as independent, or as supposedly independent. Now I'm also wondering, as we're looking at this area of AI-generated disinformation, how might the

(08:57):
digital landscape reshape the competitive dynamics, especially of media, of politics, of businesses? And what do you recommend? What are the strategic responses that are required so that businesses, individuals, and leaders can navigate these challenges?

Rod Schatz (09:15):
Yeah. I think step one is education. Like I said, not education from the perspective of trust and understanding whether the information is trustworthy, but education in terms of understanding the tech. So one of the things that I've bumped into in my career is there's a massive digital divide between executives that run organizations and their understanding of the technology that

(09:38):
supports them and the technology that is out there to help them grow and transform. So leaders really need to learn, and they need to lead with how this technology can help them. The premise of the book that I co-authored back in 2016 was around how digital transformation disrupts business

(09:58):
models. Up until November, most people thought I was all doom and gloom with the things I would say, the things I could predict were gonna happen to your industry. With generative AI, we're gonna see the pace of change in industries. Day-to-day jobs are fundamentally gonna change. So I would argue a lot of traditional business models are

(10:20):
at threat. So what that ultimately means is it's the perfect storm for organizations. So organizations now need to prepare and build strategies, and the strategy has to be one that is led to execution, not one where culture eats strategy for breakfast. So it truly has to be one where there's a true implementation

(10:43):
plan as it relates to disinformation. The other thing that I've been thinking a lot about is: just like organizations have tabletop exercises for cybersecurity breaches, they now need to develop them for disinformation breaches. So it's another thing that I definitely think they need to

(11:05):
prepare for. And what I mean by that is, going back to my earlier point, there's gonna be two aspects of disinformation. One will be at a personal level, so think of a CEO or CFO. And the other could be at an organizational level. So a competitor creates disinformation related to a top competitor to basically erode trust.

(11:28):
So I think there's a bunch of things that organizations need to do. A further one that I think needs to happen is, for lack of a better term, what I'm calling an AI safety council. Organizations need to develop an AI safety council, which is basically taking on all the ethical components of it, but it's also looking at how to deal with all the risk.

(11:52):
And so there's obviously policy, acceptable use both internally and externally of how to use these tools. There's data protection. One thing I think is gonna become very powerful is organizations that have strong stances on how they use generative AI related to customers. And that, I think, is gonna separate many firms from the

(12:13):
pack. So, in other words, responsibility.

Andreas Welsch (12:17):
To me, the part that you mentioned about how easy it is to create this kind of information or disinformation, and how others might use it, is really something that we need to create more awareness around, and I'm wondering if leaders are prepared for it, even if our friends in PR departments are prepared for it,

(12:40):
to react to such information. So to your point, I think there's a lot more awareness that needs to be created that this is possible and that this might eventually happen. Maybe let's take a look at the chat. I see there's a question from Manjunatha that I'd like to pick up and pick your brains on.

(13:01):
So Manjunatha asks: given the reality of generative AI disinformation, how can you minimize the human cost, and what broader role does generative AI play here itself? Can you think of a parallel to self-regulation here, and what does it take to do the fact checking for generative AI?

Rod Schatz (13:23):
So those are two big questions. On the human cost, I think this is one area where we're gonna see a lot of disruption. So corporations exist for one reason, and that's to make a profit. And having sat at many executive tables, I know how those discussions transpire. And generative AI can be used, which it can be, to lead to

(13:46):
automation. If you can go from a marketing team of 20 down to six, but still produce the same output, if not better, I'm pretty sure, I know, many organizations would say, then let's downsize. I was listening to an HBR podcast yesterday, and one of the

(14:08):
panelists was talking about a conversation with a CFO who was actually talking about this very topic. He was weighing, if the marketing department or even his finance team is gonna get smaller, do I make the right decision, which is for the good of society, or do I make the corporate decision, which is to let people go? I think that's definitely a question that all organizations need to start to discuss internally and come up with what

(14:31):
their game plan is. What we've seen historically, though, is we've gone through massive waves of innovation throughout the history of humankind. And what's happened is people get displaced for a short timeframe, but then they go off and learn new skills and they're redeployed elsewhere. So I think that's largely what organizations need to start doing: having some of those conversations.

(14:52):
Now, the second question, can you compare it to self-regulation? I don't think self-regulation is gonna work. The example I used earlier was social media. For the most part, the social media companies haven't had a lot of really stringent regulation on them. And you could argue the US political elections have been influenced by disinformation on those platforms.

(15:15):
And so again, coming back to why corporations exist: it's for profit, and profit to shareholders. So that's always gonna guide their direction no matter what. So I do think regulation needs to be put in place, but I don't think it's as easy as just government. I think it has to be multifaceted. I think that government obviously has a big role in it.

(15:37):
I was thinking about this yesterday. I read an article that talked about having a UN treaty across all nations around what the guiding principles of responsible AI are. And then it also got me thinking: industry associations also play a major role in this. So think of the medical profession. They have industry associations; so do lawyers, engineers, software.

(15:57):
I think industry associations also need to jump in and develop their own AI safety councils on how to deal with some of this stuff. It's one of the things I learned in the book: when disruption comes, the perception is that it's out of the blue. Organizations don't know how to respond, and so their response is slow. And then that ultimately leads to them turning into Barnes and Noble, which means they disappear.

(16:20):
So I think it has to be multifaceted.

Andreas Welsch (16:25):
Great. So maybe from there on, you mentioned we can certainly expect some changes in organizations. But maybe coming back to that disinformation topic again: how do you feel that organizations can establish a robust resilience framework to combat disinformation, to identify it, to react to it, and one that not only safeguards their

(16:49):
reputation, but one that also promotes ethical AI as a competitive advantage?

Rod Schatz (16:54):
I think the first thing that organizations need to do is monitor the internet and social media for their brand. So they need to see if anybody's putting out disinformation related to their given services, their products. So they need to be on top of it. The key is you don't want to be reactionary. And to use the cybersecurity example as well, the struggle

(17:16):
with cybersecurity is it's typically reactionary and you're always in a position of disadvantage. And with disinformation, I really think organizations need to be on top of it. I think organizations need to have a policy and a framework. If it happens, what do we do? What are our 14 steps? Who's involved?

(17:37):
And the key thing with disinformation is, not one group in an organization owns it. It's part marketing, it's part legal, it's part tech, it's part executive team. And because of that, it can be very chaotic if it happens. At the organization I was previously at, there was an incident related to one of our employees, and we didn't have a

(17:58):
good plan, and we scrambled. And so that's why I'm saying the first step is being proactive. So it's having good internal discussions. It's laying out how you're gonna respond and what your response will be. The next thing is related to use. For internal use, it's having a center of excellence. So we've seen lots of information and posts on

(18:21):
LinkedIn and other social media platforms around: you have to be careful what sort of information you put inside of the generative AI engines. Those engines themselves may use that information for retraining, and that retraining itself, if taken out of context, could also become disinformation against an organization. So it's having some of those sorts of policies and education

(18:43):
internally. I talked about an AI safety council. I also think it's about having an adaptive culture. So in some of the organizations I've talked to, many people are afraid of this technology. And the key thing that I've learned over the last four and a half months is you gotta get in there and get your fingers dirty. Play with it to really understand it. And what I mean by that is, we've also read a lot about how this

(19:07):
technology hallucinates. So how do you know when it's hallucinating? What I found is you ask the core question of what you want, and then you bombard it with secondary questions to make sure it's not giving you a hallucination. And the big thing on all of this is leadership. The executives really do need to learn this technology, and they need to lead.

(19:28):
And they need to show that there's a strategy behind all of this.
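
To make that cross-examination tactic concrete, here is a minimal Python sketch. It assumes nothing about a particular provider: ask_model is a hypothetical placeholder you would wire to whatever LLM client you actually use, and the example questions are illustrative only.

```python
# Illustrative sketch of the tactic described above: ask the core question,
# then probe the answer with secondary questions and compare for contradictions.
# `ask_model` is a hypothetical placeholder, not a real API.

def ask_model(prompt: str) -> str:
    """Placeholder: swap in a real call to your model of choice."""
    return "(model response goes here)"

def cross_examine(core_question: str, follow_ups: list[str]) -> dict[str, str]:
    """Collect the core answer plus follow-up answers so a human (or a second
    model) can check them against each other for inconsistencies."""
    answers = {"core": ask_model(core_question)}
    for i, follow_up in enumerate(follow_ups, start=1):
        probe = (
            f"Earlier question: {core_question}\n"
            f"Your earlier answer: {answers['core']}\n"
            f"Follow-up: {follow_up}"
        )
        answers[f"follow_up_{i}"] = ask_model(probe)
    return answers

if __name__ == "__main__":
    result = cross_examine(
        "When was our flagship product first released?",
        [
            "What source are you basing that date on?",
            "Could the date be different? If so, why?",
        ],
    )
    for label, answer in result.items():
        print(f"{label}: {answer}")
```

Contradictions between the core answer and the follow-up answers are the signal to distrust the output and fall back to manual fact checking.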

Andreas Welsch (19:34):
Awesome, great summary, all around how to start addressing AI-generated disinformation and come up with a plan, and devise that plan before you actually need it. It's a lot like crisis communication and having that plan together, so you just pull it out of the drawer and go according to your script and your plan if it has happened.

(19:56):
Now, we earlier talked a bit about trust and the erosion of trust as well. And I'm wondering, as AI-generated disinformation really challenges the very trust that we've put into the institutions, into the organizations that are supposed to be neutral in many ways, and into the overall digital

(20:17):
ecosystem: what are maybe some of the partnerships that you see, or collaborative efforts that companies and industries and governments can forge, to reinforce that integrity of information and make sure that we have a reliable and secure future? What are you seeing there?

Rod Schatz (20:33):
I don't think just government regulation is the solution to this. I do like the term you used, partnership. I do think that the tech ecosystem itself is definitely part of the solution. One of the things that I've been thinking about for the last couple years is related to trust and disinformation. I have a good friend, and he and I debate this all the time.

(20:53):
We really need a trust platform. So if I'm gonna post something on Twitter, it goes through basically this trust platform where I have a rating, where my trustworthiness has been X on my last posts. And then the power of the crowd gets to rank whether or not they feel my story is trustworthy.

(21:15):
And so think of Who Wants to Be a Millionaire. The power of the crowd was always in the high nineties in terms of their ability to get the right answer. And I think that trust platform, if it was blockchain-based, is also immutable. It also helps allow us to understand what is coming from credible sources.

(21:35):
So the example yesterday of that Pentagon picture: if it was on a trust platform, that person most likely has a low trustworthiness score. So then people would've looked at it and went, oh, okay. So then the news media wouldn't have been so quick to post and jump on it. So I do think there's definitely some tech in this, is where I'm going with this, that has to be built to also help us around

(21:58):
trustworthiness. And additionally, there are some companies that are working on deepfake detectors. So one of the things that we saw pop up quite quickly after ChatGPT was released was a detector on whether or not text was written by ChatGPT. So I think it definitely is a partnership. I also mentioned industry associations.

(22:20):
I think they have a big role to play. And then the key thing I think we need to do as a society is measure success on all of this stuff. Regulation is typically implemented by governments, which aren't good at measuring KPIs, that kind of stuff. And I definitely think that's something we need to implement: how do we measure success?

(22:41):
Cuz we're right now reading lots of doom and gloom about humanity and humanity's ability to survive AI. The only way we're really gonna be able to do that is to measure what we've put in place, see how it's working, and continuously improve.
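
As a rough illustration of the trust platform Rod describes, here is a toy Python sketch in which an author carries a running trustworthiness score and the crowd's votes on each post nudge that score. The class names and the scoring rule are assumptions for illustration, not an existing product or standard.

```python
# Toy sketch of the "trust platform" idea: each author carries a running
# trustworthiness score, and crowd votes on every post nudge that score.
# All names and the scoring rule are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Post:
    content: str
    votes_trustworthy: int = 0
    votes_untrustworthy: int = 0

    def crowd_verdict(self) -> float | None:
        """Share of voters who found the post trustworthy, or None if no votes yet."""
        total = self.votes_trustworthy + self.votes_untrustworthy
        return self.votes_trustworthy / total if total else None


@dataclass
class Author:
    name: str
    trust_score: float = 0.5  # start neutral; kept within [0, 1]
    posts: list[Post] = field(default_factory=list)

    def publish(self, post: Post) -> None:
        self.posts.append(post)

    def record_crowd_verdict(self, post: Post, weight: float = 0.2) -> None:
        """Blend one post's crowd verdict into the author's running score."""
        verdict = post.crowd_verdict()
        if verdict is None:
            return
        self.trust_score = (1 - weight) * self.trust_score + weight * verdict
        self.trust_score = min(1.0, max(0.0, self.trust_score))


if __name__ == "__main__":
    author = Author(name="anonymous_account_123")
    claim = Post(content="Explosion reported near a government building.")
    author.publish(claim)
    claim.votes_trustworthy, claim.votes_untrustworthy = 3, 47
    author.record_crowd_verdict(claim)
    # A low score is the cue for readers and newsrooms to verify before amplifying.
    print(f"{author.name}: trust score {author.trust_score:.2f}")
```

A blockchain, as Rod notes, would only change where such records are stored and whether they can be altered; the scoring idea itself is independent of that choice.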

Andreas Welsch (22:56):
No, that's awesome. I think that makes a lot of sense. We are getting close to the end of the show, and I was wondering if you can summarize the three key takeaways for our audience today. I know we've had a lot of questions in the chat as well and have been able to take some, but what are the three key takeaways for our audience?

Rod Schatz (23:12):
I think takeaway number one is disinformation is not going anywhere. It's gonna get worse, if anything. I do feel that all of us as individuals and as organizations, the companies we work for, need to be prepared for this. So we all need to start to develop our own proactive approach to how to do this. And the one thing I really want to emphasize too is I really

(23:36):
think that an AI safety council is the starting point for a lot of organizations. It's the ability to defend when disinformation is out there related to a brand or an individual, but it's also the ability to manage and use these tools effectively to help organizations grow and flourish, but also to help keep us

(23:57):
employed as humans.

Andreas Welsch (24:00):
Awesome.
Thank you so much, Rod. Thanks for joining us and for sharing your expertise with us, and to those in the audience, for learning with us.

Rod Schatz (24:08):
Bye-bye.